Many researchers believe that providing a detailed description of the experiments and of the algorithms used is sufficient to guarantee reproducibility. In this paper, we argue that this is largely false. To support this claim, we attempted to reproduce the experiments of another work. The authors of that paper were aware of the need for their work to be reproducible: they made their data available on one of the authors' websites, and they provided a detailed description of their methodology (only the code is missing). Nevertheless, despite all the care they took, we were unable to reproduce their work, and our numerical findings differ significantly from theirs. Without their code, we cannot tell whether the discrepancy comes from a bug (in their implementation or in ours) or from a difference in the interpretation of the model. This raises a number of ethical questions for the community: what is the validity of science if numerical results cannot be trusted? Instead of developing new methodologies, should we not spend more time reimplementing existing methods and making them available to all? This may lead to a less productive, yet more trustworthy and reliable, science.