Project description:Although solving arithmetic problems approximately is an important skill in everyday life, little is known about the development of this skill. Past research has shown that when children are asked to solve multi-digit multiplication problems approximately, they provide estimates that are often very far from the exact answer. This is unfortunate, as computational estimation is needed in many circumstances in daily life. The present study examined 4th graders', 6th graders', and adults' ability to estimate the results of arithmetic problems relative to a reference number. A developmental pattern was observed in accuracy, speed, and strategy use. With age there was a general increase in speed, and an increase in accuracy mainly for trials in which the reference number was close to the exact answer. The children tended to use the sense-of-magnitude strategy, which does not involve any calculation but relies mainly on an intuitive, coarse sense of magnitude, while the adults used the approximated-calculation strategy, which involves rounding and multiplication procedures and relies to a greater extent on calculation skills and working memory resources. Importantly, the children were less accurate than the adults, but performed well above chance level. In all age groups, performance was enhanced when the reference number was smaller (vs. larger) than the exact answer and when it was far from (vs. close to) it, suggesting the involvement of an approximate number system. The results suggest the existence of an intuitive sense of magnitude for the results of arithmetic problems that might help children and even adults with difficulties in math. The present findings are discussed in the context of past research reporting poor estimation skills among children, and of the conditions that might allow children's estimation skills to be used effectively.
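As a toy illustration of the approximated-calculation strategy described above (a hypothetical example, not the study's actual stimuli): round each operand, multiply, and compare the estimate with the reference number.

```python
# Hypothetical sketch of the "approximated calculation" strategy:
# round the operands, multiply, compare with the reference number.
def approximate_compare(a, b, reference):
    estimate = round(a, -1) * round(b, -1)   # round each operand to the nearest ten
    return "larger" if estimate > reference else "smaller"

# Is 23 x 46 larger or smaller than 800?  Estimate: 20 x 50 = 1000 -> "larger"
print(approximate_compare(23, 46, 800))      # exact answer is 1058
```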
Project description:Simulating quantum circuits using classical computers lets us analyse the inner workings of quantum algorithms. The most complete type of simulation, strong simulation, is believed to be generally inefficient. Nevertheless, several efficient strong simulation techniques are known for restricted families of quantum circuits, and we develop an additional technique in this article. Further, we show that strong simulation algorithms perform another fundamental task: solving search problems. Efficient strong simulation techniques allow solutions to a class of search problems to be counted and found efficiently. This enhances the utility of strong simulation methods, known or yet to be discovered, and extends the class of search problems known to be efficiently solvable. Relating strong simulation to search problems also bounds the computational power of efficiently strongly simulable circuits: if they could solve all problems in P, this would imply that all problems in NP and #P could be solved in polynomial time.
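For concreteness, strong simulation means computing individual output amplitudes or outcome probabilities of a circuit, not merely sampling from its output distribution. Below is a minimal, hypothetical Python/NumPy sketch of what a (brute-force, exponential-cost) strong simulator computes; it is not the efficient technique developed in the article.

```python
import numpy as np

# Single-qubit gates and a two-qubit CNOT (control = first qubit)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start in |00> and prepare a Bell state: H on qubit 0, then CNOT(0 -> 1)
state = np.zeros(4)
state[0] = 1.0
state = np.kron(H, I2) @ state
state = CNOT @ state

# "Strong simulation": query one specific output amplitude / probability
amp_11 = state[0b11]
print(f"<11|C|00> = {amp_11:.4f}, probability = {abs(amp_11)**2:.4f}")
```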
Project description:An independent set (IS) is a set of vertices in a graph such that no edge connects any two of them. In adiabatic quantum computation [E. Farhi, et al., Science 292, 472-475 (2001); A. Das, B. K. Chakrabarti, Rev. Mod. Phys. 80, 1061-1081 (2008)], a given graph G(V, E) can be naturally mapped onto a many-body Hamiltonian H_G, with edges (u, v) ∈ E being the two-body interactions between adjacent vertices u, v ∈ V. Thus, solving the IS problem is equivalent to finding all the computational basis ground states of H_G. Very recently, non-Abelian adiabatic mixing (NAAM) has been proposed to address this task, exploiting an emergent non-Abelian gauge symmetry of H_G [B. Wu, H. Yu, F. Wilczek, Phys. Rev. A 101, 012318 (2020)]. Here, we solve a representative IS problem by simulating the NAAM digitally using a linear optical quantum network, consisting of three C-Phase gates, four deterministic two-qubit gate arrays (DGA), and ten single-qubit rotation gates. The maximum IS has been successfully identified with sufficient Trotterization steps and a carefully chosen evolution path. Remarkably, we find ISs with a total probability of 0.875(16), among which the nontrivial ones have a considerable weight of about 31.4%. Our experiment demonstrates the potential advantage of NAAM for solving IS-equivalent problems.
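As a concrete illustration of the mapping described above, under the standard penalty construction (a generic choice, not necessarily the exact Hamiltonian of the cited work), H_G = Σ_{(u,v)∈E} n_u n_v with n_v = (1 + σ_v^z)/2, so a computational basis state has zero energy exactly when the occupied vertices form an independent set. A minimal Python sketch:

```python
import itertools
import numpy as np

# Example graph: a path on three vertices, V = {0, 1, 2}, E = {(0,1), (1,2)}
n_vertices = 3
edges = [(0, 1), (1, 2)]

# Diagonal Hamiltonian H_G = sum over edges of n_u * n_v, where n_v = 1 if
# vertex v is "occupied" (qubit v in |1>).  Each basis state selects a vertex
# subset; its energy counts the edges lying inside that subset.
def energy(bits):
    return sum(bits[u] * bits[v] for u, v in edges)

diag = [energy(bits) for bits in itertools.product([0, 1], repeat=n_vertices)]
H_G = np.diag(diag)

# Zero-energy ground states are exactly the independent sets of the graph.
for idx, e in enumerate(diag):
    if e == 0:
        bits = format(idx, f"0{n_vertices}b")
        subset = {v for v, b in enumerate(bits) if b == "1"}
        print(f"|{bits}>  energy 0  ->  independent set {subset}")
```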
Project description:There is strong evidence that children show selectivity in their reliance on others as sources of information, but the findings to date have largely been limited to contexts that involve factual information. The current experiments were designed to determine whether children might also show selectivity in their choice of sources within a problem-solving context. Children in two age groups (20-24 months and 30-36 months, total N=60) were presented with a series of conceptually difficult problem-solving tasks and were given an opportunity to interact with adult experimenters who were depicted as either good helpers or bad helpers. Participants in both age groups preferred to seek help from the good helpers. The findings suggest that even young children evaluate others with reference to their potential to provide help and use this information to guide their behavioral choices.
Project description:Molecular docking is a hard optimization problem that has been tackled in the past with metaheuristics, demonstrating new and challenging results when optimizing a single objective: the minimum binding energy. However, only a few papers in the literature deal with this problem by means of a multi-objective approach, and no experimental comparisons have been made to clarify which of them has the best overall performance. In this paper, we use and compare, for the first time, a set of representative multi-objective optimization algorithms applied to solve complex molecular docking problems. The approach followed focuses on optimizing the intermolecular and intramolecular energies as the two main objectives to minimize. Specifically, these algorithms are: two variants of the non-dominated sorting genetic algorithm II (NSGA-II), speed-constrained multi-objective particle swarm optimization (SMPSO), the third evolution step of generalized differential evolution (GDE3), the multi-objective evolutionary algorithm based on decomposition (MOEA/D), and the S-metric selection evolutionary multi-objective optimization algorithm (SMS-EMOA). We assess the performance of the algorithms by applying quality indicators intended to measure the convergence and diversity of the generated Pareto front approximations. We carry out a comparison with a reference mono-objective algorithm in the problem domain, the Lamarckian genetic algorithm (LGA) provided by the AutoDock tool. Furthermore, the ligand binding site and molecular interactions of computed solutions are analyzed, showing promising results for the multi-objective approaches. In addition, a case study applying the approach to aeroplysinin-1 shows its effectiveness in drug discovery.
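To make the two-objective formulation concrete, the sketch below (with made-up energy values, not AutoDock output) scores each candidate docking pose by a pair (intermolecular energy, intramolecular energy), both to be minimized, and extracts the non-dominated Pareto front that algorithms such as NSGA-II approximate:

```python
# Each candidate pose is scored by (intermolecular, intramolecular) energy,
# both minimized; the multi-objective algorithms search for the Pareto front.

def dominates(a, b):
    """True if pose a is at least as good as b in both energies and
    strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(poses):
    return [p for p in poses if not any(dominates(q, p) for q in poses if q != p)]

# (intermolecular, intramolecular) energies in kcal/mol -- hypothetical values
poses = [(-9.1, -1.2), (-8.7, -2.5), (-9.5, -0.4), (-7.9, -2.6), (-8.7, -1.0)]
print(pareto_front(poses))
# -> [(-9.1, -1.2), (-8.7, -2.5), (-9.5, -0.4), (-7.9, -2.6)]
```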
Project description:With growing interest in promoting skills related to the scientific process, we studied performance in solving ill-defined problems among graduating biochemistry majors at a public, minority-serving university. As adoption of techniques for facilitating the attainment of higher-order learning objectives broadens, so too does the need to appropriately measure and understand student performance. We extended previous validation of the Individual Problem Solving Assessment (IPSA) and administered multiple versions of the IPSA across two semesters of biochemistry courses. A final version was taken by majors just before program exit, and student responses on that version were analyzed both quantitatively and qualitatively. This mixed-methods study quantifies student performance in scientific problem solving while probing the qualitative nature of unsatisfactory solutions. Of the five domains measured by the IPSA, the average graduate was successful in only two: evaluating given experimental data to state results, and reflecting on performance after the solution to the problem was provided. The primary difficulties in each domain were quite different; the most widespread challenge for students was to design an investigation that rationally aligned with a given hypothesis. We close by extending the findings into pedagogical recommendations.
Project description:Gas leakage during minimally invasive surgery is an aerosolization hazard. Sensitive optical and thermographic imaging can demonstrate such leaks and differentiate between their mechanistic categories, enabling engineering solutions that fortify surgical care against the pollutants and pathogens affecting operating room teams.
Project description:Networks of neurons in the brain apply, unlike processors in our current generation of computer hardware, an event-based processing strategy, in which short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the network from simple, stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource; nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, for the Traveling Salesman Problem, networks of spiking neurons carry out a more efficient stochastic search for good solutions than stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling.
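As a caricature of the energy-function approach (abstracting spike timing away and treating each neuron as a binary unit, closer to the Boltzmann-machine comparison above than to the authors' spiking implementation), the sketch below shapes an energy equal to the number of violated clauses of a toy SAT instance and lets noisy updates search for a zero-energy state:

```python
import math
import random

# Toy constraint satisfaction problem: a 3-variable SAT instance in CNF.
# Each literal is (variable_index, is_positive).
clauses = [[(0, True), (1, True)],      # x0 or x1
           [(0, False), (2, True)],     # not x0 or x2
           [(1, False), (2, False)]]    # not x1 or not x2

def violated(state):
    """Energy = number of unsatisfied clauses (0 means solved)."""
    return sum(not any(state[v] == pos for v, pos in c) for c in clauses)

def noisy_search(T=0.5, steps=2000, seed=1):
    rng = random.Random(seed)
    state = [rng.random() < 0.5 for _ in range(3)]
    for _ in range(steps):
        i = rng.randrange(3)                      # pick a random "neuron"
        flipped = state[:]
        flipped[i] = not flipped[i]
        dE = violated(flipped) - violated(state)
        # Stochastic update: noise lets the network escape local minima
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            state = flipped
        if violated(state) == 0:
            return state
    return state

print(noisy_search())   # e.g. a zero-energy state satisfying all clauses
```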
Project description:Quantum ground-state problems are computationally hard for general many-body Hamiltonians; no classical or quantum algorithm is known to solve them efficiently. Nevertheless, if a trial wavefunction approximating the ground state is available, as often happens for many problems in physics and chemistry, a quantum computer can employ this trial wavefunction to project out the ground state by means of the phase estimation algorithm (PEA). We performed an experimental realization of this idea by implementing a variational-wavefunction approach to solve the ground-state problem of the Heisenberg spin model with an NMR quantum simulator. Our iterative phase estimation procedure yields the eigenenergies to a precision of 10⁻⁵. The ground-state fidelity was distilled to be more than 80%, and the singlet-to-triplet switching near the critical field is reliably captured. This result shows that quantum simulators can leverage classical trial wavefunctions better than classical computers can.
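A classical toy emulation of the projection step (a hypothetical two-spin example, not the NMR experiment): phase estimation applied to a trial state returns eigenenergy E_k with probability |<E_k|trial>|², so a good trial wavefunction concentrates the outcome on the ground state.

```python
import numpy as np

# Two-spin Heisenberg model in a field: H = J(SxSx + SySy + SzSz) + B(Sz1 + Sz2)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2)
J, B = 1.0, 0.5
H = J * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz)) \
    + B * (np.kron(sz, I2) + np.kron(I2, sz))

# Imperfect classical trial wavefunction, biased toward the singlet ground state
theta = 0.387
trial = np.zeros(4, dtype=complex)
trial[0b01], trial[0b10] = np.cos(theta), -np.sin(theta)

# PEA projects the trial state onto eigenstates of H: outcome k appears with
# probability |<E_k|trial>|^2, and the phase register reveals E_k.
evals, evecs = np.linalg.eigh(H)
probs = np.abs(evecs.conj().T @ trial) ** 2
for E, p in zip(evals, probs):
    print(f"E = {E:+.4f}   projection probability = {p:.3f}")
# The singlet ground state at E = -0.75 is obtained with probability ~0.85.
```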
Project description:Constraint satisfaction problems are ubiquitous in many domains. They are typically solved using conventional digital computing architectures that do not reflect the distributed nature of many of these problems and are thus ill-suited for solving them. Here we present a parallel analogue/digital hardware architecture specifically designed to solve such problems. We cast constraint satisfaction problems as networks of stereotyped nodes that communicate using digital pulses, or events. Each node contains an oscillator implemented using analogue circuits. The non-repeating phase relations among the oscillators drive the exploration of the solution space. We show that this hardware architecture can yield state-of-the-art performance on random SAT problems under reasonable assumptions on the implementation. We present measurements from a prototype electronic chip demonstrating that a physical implementation of the proposed architecture is robust to practical non-idealities, and we validate the proposed theory.
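A hypothetical software caricature of the dynamics described above (not the actual chip's circuit): one node per SAT variable, each containing an oscillator with an incommensurate frequency; when a node's phase wraps it emits an event, and the event flips the variable if it currently appears in an unsatisfied clause, so the non-repeating phase relations keep the exploration from cycling.

```python
import math
import random

# Tiny SAT instance: each literal is (variable_index, is_positive)
clauses = [[(0, True), (1, False)], [(1, True), (2, True)], [(0, False), (2, False)]]
n_vars = 3

def unsatisfied(state):
    return [c for c in clauses if not any(state[v] == pos for v, pos in c)]

rng = random.Random(1)
state = [rng.random() < 0.5 for _ in range(n_vars)]
freqs = [1.0, math.sqrt(2), math.pi / 2]   # pairwise incommensurate frequencies
phases = [0.0] * n_vars
dt = 0.05

for step in range(10000):
    if not unsatisfied(state):
        print(f"solved at step {step}: {state}")
        break
    for i in range(n_vars):
        phases[i] += freqs[i] * dt
        if phases[i] >= 2 * math.pi:       # phase wrap: the node emits an event
            phases[i] -= 2 * math.pi
            if any(i == v for c in unsatisfied(state) for v, _ in c):
                state[i] = not state[i]    # the event flips a violating variable
```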