Project description:The recent world financial crisis has increased the number of bankruptcies in numerous countries and has given rise to a new area of research that responds to the need to predict this phenomenon, not only at the level of individual countries but also at a global level, offering explanations of the common characteristics shared by the affected companies. Nevertheless, few studies focus on the prediction of bankruptcy globally. To compensate for this lack of empirical literature, this study uses a logistic regression framework to construct predictive bankruptcy models for Asia, Europe and America, as well as global models for the whole world. The objective is to construct a global model with a high capacity for predicting bankruptcy in any region of the world. The results confirm the superiority of the global model over the regional models for periods of up to three years prior to bankruptcy.
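As a sketch of how such a logistic-regression bankruptcy model can be built, the snippet below fits one with scikit-learn on synthetic data; the three financial-ratio features and all numbers are illustrative assumptions, not the study's actual variables or results.

```python
# Minimal sketch of a logistic-regression bankruptcy model (illustrative only).
# Feature names and data are hypothetical, not the study's variables.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical financial ratios: liquidity, leverage, profitability
X = rng.normal(size=(n, 3))
# Synthetic bankruptcy labels driven by low liquidity and high leverage
logits = -1.5 - 1.0 * X[:, 0] + 0.8 * X[:, 1] - 0.5 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```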
Project description:The prediction of imminent bankruptcy of a company is important to banks, government agencies, business owners, and other business stakeholders. Bankruptcy is influenced by many global and local factors, so it can hardly be anticipated without deeper analysis and economic modeling knowledge. To make this problem even more challenging, the available bankruptcy datasets are usually imbalanced, since even in times of financial crisis bankrupt companies constitute only a fraction of all operating businesses. In this article, we propose a novel bankruptcy prediction approach based on a shallow autoencoder ensemble that is optimized by a genetic algorithm. The goal of the autoencoders is to learn the distribution of the majority class: going-concern businesses. Bankrupt companies are then marked by higher autoencoder reconstruction errors. The choice of the optimal threshold value for the reconstruction error, which is used to differentiate between bankrupt and nonbankrupt companies, is crucial and determines the final classification decision. In our approach, the threshold for each autoencoder is determined by a genetic algorithm. We evaluate the proposed method on four different datasets containing small and medium-sized enterprises. The results show that the autoencoder ensemble is able to identify bankrupt companies with geometric mean scores ranging from 71% to 93.7%, depending on the industry and evaluation year.
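A minimal sketch of the reconstruction-error idea described above: a shallow autoencoder (here an MLPRegressor trained to reproduce its input) is fit on the majority class only, and a threshold on the reconstruction error separates the classes. A simple grid search maximizing the geometric mean stands in for the paper's genetic algorithm, and all data are synthetic.

```python
# Sketch: anomaly-style bankruptcy detection with a shallow autoencoder.
# An MLPRegressor trained to reconstruct its input stands in for the paper's
# autoencoders; grid search over thresholds stands in for the genetic
# algorithm. All data here are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, size=(500, 8))   # going-concern companies
bankrupt = rng.normal(2.0, 1.5, size=(40, 8))   # minority class, shifted

ae = MLPRegressor(hidden_layer_sizes=(3,), max_iter=2000, random_state=1)
ae.fit(healthy, healthy)                        # learn the majority class only

def recon_error(X):
    return np.mean((ae.predict(X) - X) ** 2, axis=1)

errors = np.concatenate([recon_error(healthy), recon_error(bankrupt)])
labels = np.concatenate([np.zeros(len(healthy)), np.ones(len(bankrupt))])

best_t, best_gmean = None, -1.0
for t in np.quantile(errors, np.linspace(0.5, 0.99, 50)):
    pred = (errors > t).astype(int)
    sens = np.mean(pred[labels == 1])           # recall on bankrupt firms
    spec = np.mean(1 - pred[labels == 0])       # recall on healthy firms
    g = np.sqrt(sens * spec)                    # geometric mean score
    if g > best_gmean:
        best_t, best_gmean = t, g
print(f"threshold={best_t:.3f}  g-mean={best_gmean:.3f}")
```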
Project description:Background: After embryonic development, Caenorhabditis elegans progresses through four larval stages, each ending with a molt. The repetitive nature of C. elegans postembryonic development is considered an oscillatory process, a concept that has gained traction from its regulation by a circadian clock gene homologue. Nevertheless, each larval stage has a defined duration and entails specific events. Since the overall duration of development is controlled by numerous factors, we have asked whether different rate-limiting interventions impact all stages equally. Results: We have measured the duration of each stage of development for over 2500 larvae, under varied environmental conditions known to alter overall developmental rate. We applied changes in temperature and in the quantity and quality of nutrition, and analysed the effect of genetically reduced insulin signalling. Our results show that the distinct developmental stages respond differently to these perturbations. The changes in the duration of specific larval stages appear to depend on stage-specific events. Furthermore, our high-resolution measurement of the effect of temperature on the stage-specific duration of development has unveiled novel features of temperature dependence in C. elegans postembryonic development. Conclusions: Altogether, our results show that multiple factors fine-tune developmental timing, impacting larval stages independently. Further understanding of the regulation of this process will allow modelling of the mechanisms that control developmental timing.
Project description:With urban populations increasing dramatically worldwide, cities are playing an increasingly critical role in human societies and the sustainability of the planet. An obstacle to effective policy is the lack of meaningful urban metrics based on a quantitative understanding of cities. Typically, linear per capita indicators are used to characterize and rank cities. However, these implicitly ignore the fundamental role of nonlinear agglomeration integral to the life history of cities. As such, per capita indicators conflate general nonlinear effects, common to all cities, with local dynamics, specific to each city, failing to provide direct measures of the impact of local events and policy. Agglomeration nonlinearities are explicitly manifested by the superlinear power-law scaling of most urban socioeconomic indicators with population size, all with similar exponents (≈1.15). As a result, larger cities are disproportionately the centers of innovation, wealth and crime, all to approximately the same degree. We use these general urban laws to develop new urban metrics that disentangle dynamics at different scales and provide true measures of local urban performance. New rankings of cities and a novel and simpler perspective on urban systems emerge. We find that local urban dynamics display long-term memory, so cities under- or outperforming their size expectation maintain such (dis)advantage for decades. Spatiotemporal correlation analyses reveal a novel functional taxonomy of U.S. metropolitan areas that is generally not organized geographically but based instead on common local economic models, innovation strategies and patterns of crime.
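The size-adjusted metric idea can be made concrete in a few lines: fit the power law Y = Y0 N^β on log-log axes, then treat each city's residual from the fitted line as its performance relative to the size expectation. The sketch below uses synthetic data, with np.polyfit as a stand-in for the paper's estimation procedure.

```python
# Sketch: estimate the urban scaling exponent and size-adjusted residuals.
# Synthetic data; np.polyfit on log-log axes stands in for the paper's fit.
import numpy as np

rng = np.random.default_rng(2)
pop = 10 ** rng.uniform(4.5, 7.0, size=200)          # city populations
beta_true = 1.15
gdp = 10.0 * pop ** beta_true * np.exp(rng.normal(0, 0.2, size=200))

beta, log_y0 = np.polyfit(np.log(pop), np.log(gdp), 1)
print(f"estimated exponent: {beta:.3f}")             # ~1.15, superlinear

# Residual = log deviation from the size expectation: a local performance
# metric that removes the general agglomeration effect shared by all cities.
resid = np.log(gdp) - (log_y0 + beta * np.log(pop))
top = np.argsort(resid)[::-1][:5]
print("top overperformers (indices):", top)
```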
Project description:The aim of this article is to answer the question of whether the unreliability of the Altman bankruptcy prediction model may be caused by manipulations in financial statements. Our study was carried out on a group of 369 bankrupt Polish companies, with the research period covering the years 2011-2020. We divided the companies into two groups: those correctly classified by Altman's model as at risk of bankruptcy, and those for which the model did not indicate a significant bankruptcy risk. Using a logit model, we tested whether the probability of a company being correctly classified as failed depends on the risk of a manipulation of its financial statements. We used Benford's law to measure the risk of a manipulation of financial statements, and we repeated our study using panel data models. Our analyses show that the manipulation of financial statements is not the cause of the inaccurate predictions of the Altman model. On the contrary, the results of the analyses indicate that manipulation occurs in companies with a lower Z-score and therefore a worse financial situation. This means that a deterioration in the quality of financial statements can be a signal of an increasing probability of bankruptcy.
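As an illustration of how Benford's law can flag manipulation risk, the sketch below compares the observed leading-digit distribution of (synthetic) financial figures against Benford's expected frequencies P(d) = log10(1 + 1/d) with a chi-square test; this shows the general technique, not the paper's exact risk measure.

```python
# Sketch: a Benford's-law check on leading digits of financial-statement
# figures, using a chi-square goodness-of-fit test. Data are synthetic.
import numpy as np
from scipy.stats import chisquare

benford = np.log10(1 + 1 / np.arange(1, 10))        # P(first digit = d)

def first_digits(values):
    v = np.abs(np.asarray(values, dtype=float))
    v = v[v > 0]
    # Scale each value into [1, 10) and truncate to get its leading digit.
    return (v / 10 ** np.floor(np.log10(v))).astype(int)

rng = np.random.default_rng(3)
amounts = np.exp(rng.normal(10, 2, size=5000))      # synthetic line items
d = first_digits(amounts)
observed = np.bincount(d, minlength=10)[1:10]

stat, p = chisquare(observed, f_exp=benford * observed.sum())
print(f"chi-square={stat:.1f}, p={p:.3g}")          # low p -> deviation
```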
Project description:Population-normalized indicators (e.g., GDP per capita), which assume that indicators scale linearly with population, are ubiquitously used in comparisons of national development performance. This assumption, however, is not valid, because it ignores agglomeration effects resulting from nonlinear interactions in socioeconomic systems. Here, we present extensive empirical evidence showing sub-linear scaling, rather than the presumed linear scaling, between population and multiple indicators of national development performance. We then develop a theoretical framework based on the scaling rule observed in cities to explore the origin of scaling in countries. Finally, we demonstrate that urbanization plays a pivotal role in transforming national development from limited sub-linear growth to unlimited super-linear growth. This underscores the significance of urbanization in achieving sustained growth and elevating human living standards at the national level. Our findings have the potential to inform policies aimed at promoting equitable inter-country comparison and achieving sustainable development in countries.
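To make the sub-linear versus linear distinction operational: fit log Y = log Y0 + β log N and check whether the confidence interval for β lies below 1. The sketch below does this on synthetic country-level data, with scipy's linregress standing in for the authors' estimation.

```python
# Sketch: is scaling sub-linear (beta < 1)? Fit on log-log axes and test
# whether the estimate differs from 1. Synthetic country-level data.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(4)
pop = 10 ** rng.uniform(6, 9, size=150)             # country populations
indicator = 5.0 * pop ** 0.85 * np.exp(rng.normal(0, 0.3, size=150))

fit = linregress(np.log(pop), np.log(indicator))
lo, hi = fit.slope - 1.96 * fit.stderr, fit.slope + 1.96 * fit.stderr
print(f"beta = {fit.slope:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
if hi < 1:
    print("sub-linear: per-capita ratios fall with population size")
```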
Project description:Prospective studies of post-cataract strabismus are scarce in the world literature, as are studies of post-cataract ptosis in the Indian literature. 150 cataract patients without pre-existing strabismus or ptosis underwent standard extracapsular cataract extraction with posterior chamber intraocular lens implantation under two-point peribulbar anaesthesia and were evaluated post-operatively for strabismus and ptosis. At the end of the first week, there were 10/150 (6.67%) cases of strabismus, 13/150 (8.67%) cases of ptosis and 5/150 (3.33%) cases of both combined, which reduced to 2% each (3/150) by the twelfth week. The probable factors for causation and recovery are discussed.
Project description:Drug combinations are required to treat advanced cancers and other complex diseases. Compared with monotherapy, combination treatments can enhance efficacy and reduce toxicity by lowering the doses of the single drugs; synergistic combinations are of particular interest. Since drug combination screening experiments are costly and time-consuming, reliable machine learning models are needed to prioritize potential combinations for further studies. Most current machine learning models are based on scalar-valued approaches, which predict individual response values or synergy scores for drug combinations. We take a functional output prediction approach, in which full, continuous dose-response combination surfaces are predicted for each drug combination on the cell lines. We investigate the predictive power of the recently proposed comboKR method, which is based on a powerful input-output kernel regression technique and functional modeling of the response surface. In this work, we develop a scaled-up formulation of comboKR that also implements improved modeling choices: we (1) incorporate new modeling choices for the output drug combination response surfaces into the comboKR framework, and (2) propose a projected gradient descent method to solve the challenging pre-image problem that is traditionally solved with simple candidate-set approaches. We provide a thorough experimental analysis of comboKR 2.0 on three real-world datasets within various challenging experimental settings, including cases where drugs or cell lines have not been encountered in the training data. Our comparison with synergy score prediction methods further highlights the relevance of dose-response prediction approaches over simple scoring methods.
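The pre-image step searches for a feasible dose-response surface that best matches the kernel regression output. As a generic stand-in for that optimization, the sketch below runs projected gradient descent on a box-constrained least-squares problem, projecting onto [0, 1] (responses as fractions); it illustrates the technique only, not comboKR's actual objective.

```python
# Sketch: projected gradient descent for a box-constrained least-squares
# problem, a generic stand-in for the pre-image search described above
# (the real comboKR objective differs; this shows the technique only).
import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=(40, 25))           # 25 = flattened 5x5 dose grid
x_true = np.clip(rng.normal(0.5, 0.3, size=25), 0, 1)
b = A @ x_true

def project(x):
    return np.clip(x, 0.0, 1.0)         # responses constrained to [0, 1]

x = np.full(25, 0.5)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for the quadratic objective
for _ in range(500):
    grad = A.T @ (A @ x - b)            # gradient of 0.5*||Ax - b||^2
    x = project(x - step * grad)        # gradient step, then projection

surface = x.reshape(5, 5)               # recovered dose-response surface
print(f"reconstruction error: {np.linalg.norm(x - x_true):.4f}")
```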
Project description:Cities exhibit consistent returns to scale in economic outputs, and urban scaling analysis is widely adopted to uncover common mechanisms in cities' socioeconomic productivity. Leading theories view cities as closed systems, with returns to scale arising from intra-city social interactions. Here, we argue that interactions between cities, particularly via shared organizations such as firms, significantly influence a city's economic output. By examining global data on city connectivity through multinational firms alongside urban scaling statistics for Gross Domestic Product (GDP) from the United States, the EU, and China, we establish that global connectivity notably enhances GDP while controlling for population. After accounting for global connectivity, the effect of population on GDP is no longer distinguishable from linear. To differentiate between local and global mechanisms, we analyzed homicide case data, anticipating dominant local effects. As expected, inter-city connectivity showed no significant impact there. Our research highlights that inter-city effects matter more for some urban outputs than for others. This empirical analysis lays the groundwork for incorporating inter-city organizational connections into urban scaling theories and could inform future model development.
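The "controlling for population" claim amounts to a regression of log GDP on log population and log connectivity. The sketch below builds synthetic data in which connectivity carries the nonlinearity, so the population elasticity looks superlinear on its own (~1.15) but drops to ~1 once connectivity is included; variable names and coefficients are illustrative assumptions, not the study's estimates.

```python
# Sketch: does controlling for inter-city connectivity absorb the superlinear
# population effect? OLS of log GDP on log population and log connectivity,
# on synthetic data constructed so that connectivity carries the nonlinearity.
import numpy as np

rng = np.random.default_rng(5)
n = 300
log_pop = rng.uniform(11, 16, size=n)
# Synthetic connectivity that grows superlinearly with population
log_conn = 1.5 * log_pop + rng.normal(0, 0.5, size=n)
log_gdp = 2.0 + 1.0 * log_pop + 0.1 * log_conn + rng.normal(0, 0.2, size=n)

X = np.column_stack([np.ones(n), log_pop, log_conn])
coef, *_ = np.linalg.lstsq(X, log_gdp, rcond=None)
print(f"population elasticity, connectivity controlled: {coef[1]:.3f}")  # ~1.0

# Without the control, the population coefficient absorbs the connectivity
# channel and looks superlinear (~1.0 + 0.1 * 1.5 = 1.15):
X0 = np.column_stack([np.ones(n), log_pop])
coef0, *_ = np.linalg.lstsq(X0, log_gdp, rcond=None)
print(f"population elasticity alone: {coef0[1]:.3f}")
```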
Project description:The institutions of science are in a state of flux. Declining public funding for basic science, the increasingly corporatized administration of universities, the increasing "adjunctification" of the professoriate and poor academic career prospects for postdoctoral scientists indicate a significant mismatch between the reality of the market economy and expectations in higher education for science. Solutions to these issues typically revolve around the idea of fixing the career "pipeline", envisioned as a pathway from higher-education training to a coveted permanent position, and then up a career ladder until retirement. In this paper, we propose and describe the term "ecosystem" as a more appropriate way to conceptualize today's scientific training and the professional landscape of the scientific enterprise. First, we highlight the issues around the concept of "fixing the pipeline". Then, we articulate our ecosystem metaphor by describing a series of concrete design patterns that draw on peer-to-peer, decentralized, cooperative, and commons-based approaches for creating a new, dynamic scientific enterprise.