Computer simulations
What are they used for?
Problem-solving: they help solve problems that may be analytically intractable. For example, finding good solutions to the Travelling Salesman Problem.
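As a toy illustration (not from the source), the sketch below applies a randomized 2-opt local search, a simple heuristic, to a small random instance of the problem; all function names and parameter values are made up.

```python
import math
import random

def tour_length(points, order):
    """Total length of the closed tour visiting the points in the given order."""
    return sum(
        math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def two_opt(points, iterations=10_000, seed=0):
    """Improve a random tour by repeatedly reversing segments (2-opt moves)."""
    rng = random.Random(seed)
    order = list(range(len(points)))
    rng.shuffle(order)
    best = tour_length(points, order)
    for _ in range(iterations):
        i, j = sorted(rng.sample(range(len(points)), 2))
        candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
        length = tour_length(points, candidate)
        if length < best:  # keep the move only if it shortens the tour
            order, best = candidate, length
    return order, best

points = [(random.random(), random.random()) for _ in range(30)]
_, length = two_opt(points)
print(f"approximate tour length: {length:.3f}")
```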
Explaining phenomena: we can find hypotheses to answer a question like “why \(x\)?” by trying to reconstruct how \(x\) came to be. For example, neuron simulations help explain some activation patterns observed in real neural systems.
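A minimal, hypothetical sketch of this kind of model (not the specific simulations referred to above) is a leaky integrate-and-fire neuron, which reproduces a regular spiking pattern from a constant input current; the parameter values are invented for illustration.

```python
def simulate_lif(current=1.5, dt=0.1, steps=1000,
                 tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Return spike times of a leaky integrate-and-fire neuron (Euler method)."""
    v = v_rest
    spike_times = []
    for step in range(steps):
        # Euler step of the membrane equation dv/dt = (-(v - v_rest) + current) / tau
        v += dt * (-(v - v_rest) + current) / tau
        if v >= v_thresh:        # threshold crossing: emit a spike and reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

print(simulate_lif()[:5])  # first few spike times (arbitrary time units)
```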
Visualizing phenomena: simulations let us reconstruct phenomena visually as a way of increasing understanding. For example, this is traditionally sought in the area of fluid simulation.
Predicting phenomena: for example, we can simulate a market to predict its future dynamics, or simulate the spread of a disease in order to be prepared for different scenarios.
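For the disease case, a standard discrete-time SIR (susceptible–infected–recovered) model is a minimal sketch of such a predictive simulation; the parameter values below are assumptions chosen only for illustration.

```python
def simulate_sir(beta=0.3, gamma=0.1, days=160,
                 population=1_000_000, initially_infected=10):
    """Return (susceptible, infected, recovered) counts for each day."""
    s, i, r = population - initially_infected, initially_infected, 0
    history = [(s, i, r)]
    for _ in range(days):
        new_infections = beta * s * i / population   # contacts that transmit
        new_recoveries = gamma * i                   # infected who recover
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

history = simulate_sir()
peak_day = max(range(len(history)), key=lambda d: history[d][1])
print(f"infection peak on day {peak_day}: about {history[peak_day][1]:,.0f} infected")
```

Re-running the same model with different parameter values (say, a lower beta after some intervention) is what “being prepared for different scenarios” amounts to in practice.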
Exploring different possibilities: simulations don’t require a “material” setup for experimentation and can thus be cheaper, faster and more controllable (e.g., you can “pause” time to observe certain features).
- On the other hand, we need to be sure we are representing reality accurately…
Verifying simulations
How can these simulations fail?
Truncation and rounding of numbers: inputs that are too large or too small may cause overflow or underflow and, therefore, imprecise results.
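A few Python one-liners (illustrative only, not from the source) show how finite precision already appears in ordinary arithmetic:

```python
# Illustrative only: three ways finite floating-point precision shows up.
print(0.1 + 0.2 == 0.3)   # False: 0.1 and 0.2 have no exact binary representation
print(1e308 * 10)         # inf: the product overflows double precision
print(1e16 + 1 - 1e16)    # 0.0: the added 1 is absorbed (lost to rounding)
```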
Calibration: are our input values accurate enough? If they aren’t, our calculations may even amplify inaccuracies (something explored in measurement theory), as illustrated after the next point.
- Or even in the use of constants: pi, Avogadro’s number, the gravitational constant… how much precision do we need?
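As a made-up illustration of how an inaccurate input can be amplified, the sketch below pushes a value with a relative error of one part in a million through a subtraction of nearly equal numbers (catastrophic cancellation); every number here is invented.

```python
def relative_error(true_value, measured_value):
    """Relative error of a measured value with respect to the true one."""
    return abs(measured_value - true_value) / abs(true_value)

x_true = 1.000000
x_measured = 1.000001          # calibration is off by one part in a million

# Subtracting a nearly equal quantity amplifies the relative error:
y_true = x_true - 0.999999
y_measured = x_measured - 0.999999

print(f"input error:  {relative_error(x_true, x_measured):.1e}")   # ~1e-06
print(f"output error: {relative_error(y_true, y_measured):.1e}")   # ~1e+00 (100%)
```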
Hardware issues
- For example, without enough memory and processing power we can’t achieve high precision: our calculations may be truncated or take too long to run.
- Soft errors: transient faults in the hardware (e.g., from radiation or electromagnetic interference) can really mess things up!
- For example, Intel has worked towards incorporating a built-in cosmic ray detector into its chips. The detector would spot cosmic ray hits either on nearby circuits or on the detector itself and, when triggered, activate a series of error-checking circuits.
- Design errors: for example, the (in)famous Pentium FDIV bug returned certain division results with less precision than the correct values, and replacing the flawed processors cost Intel about $500 million.
- All these considerations are very important in the design of High-Performance Computing applications.
Module and library dependencies
- For example, a bug found around 2016 in a piece of neuroscience software suggested that many publications could be invalid. Everyone was using the same software and nobody had paid attention to that!
Validating simulations
Can we really trust that what we are simulating will hold in practice?
Some problems:
Epistemic opacity: it is hard, and sometimes even impossible, to trace all the steps and calculations by which a simulation arrives at a given result. In this sense, the simulation is not transparent.
Are they better or worse than material experiments? This is still a big discussion in the academic literature: the materiality argument holds that it is still better to experiment in the real world. However, even in “material experiments”, how “real” is the world we are dealing with?
What impact do they have on our way of doing science and on promoting scientific virtue? Is science now just standing in front of a screen, dealing with abstract entities?