Behavioral and Cognitive Robotics
An adaptive perspective

Stefano Nolfi

© Stefano Nolfi, 2021



References

Abadi M., Barham P., Chen J., Chen Z., Davis A., Dean J. et al. (2016). TensorFlow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265-283.

Achiam J. (2018). OpenAI Spinning Up. GitHub repository, https://github.com/openai/spinningup. See also https://spinningup.openai.com/

Andrychowicz M., Baker B., Chociej M. et al. (2018). Learning dexterous in-hand manipulation. arXiv:1808.00177v5.

Andrychowicz M., Wolski F., Ray A., Schneider J., Fong R., Welinder P. ... & Zaremba W. (2017). Hindsight experience replay. arXiv preprint arXiv:1707.01495.

Argall B.D., Chernova S., Veloso M. & Browning B. (2009). A survey of robot learning from demonstration. Robotics and Autonomous Systems, (57) 5: 469-483.

Arkin R. (1998). Behavior-based Robotics. Cambridge, MA: MIT Press.

Asada M. & Cangelosi A. (in press). Cognitive Robotics Handbook. Cambridge, MA: MIT Press.

Auerbach J.E., Aydin D., Maesani A., Kornatowski P.M., Cieslewski T., Heitz G., Fernando P.R., Loshchilov I., Daler L. & Floreano D. (2014). RoboGen: Robot generation through artificial evolution. In H. Sayama, J. Rieffel, S. Risi, R. Doursat & H. Lipson (Eds.), Proceedings of the Fourteenth International Conference on the Synthesis and Simulation of Living Systems (ALIFE 14). New York: The MIT Press.

Badia, A. P., Piot, B., Kapturowski, S., Sprechmann, P., Vitvitskyi, A., Guo, Z. D., & Blundell, C. (2020, November). Agent57: Outperforming the atari human benchmark. In International Conference on Machine Learning (pp. 507-517). PMLR.

Baldassarre G., Parisi D. & Nolfi S. (2006). Distributed coordination of simulated robots based on self-organisation. Artificial Life, 12(3):289-311.

Baldassarre G., Trianni V., Bonani M., Mondada F., Dorigo M. & Nolfi S. (2007). Self-organised coordinated motion in groups of physically connected robots. IEEE Transactions on Systems, Man, and Cybernetics, 37(1):224-239.

Baldassarre, G., & Mirolli, M. (Eds.). (2013). Intrinsically motivated learning in natural and artificial systems. Berlin: Springer Verlag.

Bansal T., Pachocki J., Sidor S., Sutskever I. & Mordatch, I. (2017). Emergent complexity via multi-agent competition. arXiv preprint arXiv:1710.03748.

Beer R.D. (1995). On the dynamics of small continuous-time recurrent neural networks. Adaptive Behavior 3(4): 469-509.

Bellemare M.G., Naddaf Y., Veness J. & Bowling M. (2013). The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47: 253-279.

Billard A. & Grollman D. (2013). Robot learning by demonstration. Scholarpedia, 8 (12): 3824.

Billard A.G., Calinon S. & Dillmann R. (2016). Learning from humans. In B. Siciliano & O. Khatib (Eds.), Handbook of Robotics, II Edition. Berlin: Springer Verlag.

Bonabeau E., Dorigo M. & Theraulaz G. (1999). Swarm Intelligence: From Natural to Artificial Systems. Oxford, U.K.: Oxford University Press.

Bonani M., Longchamp V., Magnenat S., Retornaz P., Burnier D., Roulet G., Vaussard F. & Mondada F. (2010). The marXbot, a miniature mobile robot opening new perspectives for the collective-robotic research. IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, 2010: 4187-4193, doi: 10.1109/IROS.2010.5649153.

Bourgine P. & Stewart J. (2006). Autopoiesis and cognition. Artificial Life (10) 3: 327-345.

Braitenberg V. (1986). Vehicles: Experiments in Synthetic Psychology. Cambridge, MA: MIT press.

Brockhoff D., Auger A., Hansen N., Arnold D.V. & Hohm T. (2010). Mirrored sampling and sequential selection for evolution strategies. In International Conference on Parallel Problem Solving from Nature. Berlin, Germany: Springer Verlag.

Brockman G., Cheung V., Pettersson L., Schneider J., Schulman J., Tang J. & Zaremba W. (2016). OpenAI Gym. arXiv:1606.01540.

Brooks R.A. (1986). A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation (2) 1: 14-23.

Brooks R.A. (1991). New approaches to robotics. Science, 253:1227-1232.

Burda, Y., Edwards, H., Storkey, A., & Klimov, O. (2018). Exploration by random network distillation. arXiv preprint arXiv:1810.12894.

Camazine S., Deneubourg J.-L., Franks N.R., Sneyd J., Theraulaz G. & Bonabeau E. (2001). Self-Organization in Biological Systems. Princeton University Press.

Cangelosi A. & Parisi D. (2012). Simulating the evolution of language. Springer Science & Business Media.

Cangelosi A. & Schlesinger M. (2015). Developmental Robotics: From Babies to Robots. Cambridge, MA: MIT Press.

Cangelosi, A., Metta, G., Sagerer, G., Nolfi, S., Nehaniv, C., Fischer, K., ... & Zeschel, A. (2010). Integration of action and language knowledge: A roadmap for developmental robotics. IEEE Transactions on Autonomous Mental Development, 2(3), 167-195.

Carvalho J.T. & Nolfi S. (2016). Behavioural plasticity in evolving robots. Theory in Biosciences, 135 (4): 201-216.

Carvalho J.T. & Nolfi S. (2016). Cognitive offloading does not prevent but rather promotes cognitive development. PLoS ONE. 11(8): e0160679.

Chemero A. (2011). Radical Embodied Cognitive Science. Cambridge, MA: MIT Press.

Cheney N., Clune J. & Lipson H. (2014). Evolved electrophysiological soft robots. In H. Sayama, J. Rieffel, S. Risi, R. Doursat & H. Lipson (Eds.), Proceedings of the Fourteenth International Conference on the Synthesis and Simulation of Living Systems (ALIFE 14). New York: The MIT Press.

Collins S., Ruina A., Tedrake R. & Wisse M. (2005). Efficient bipedal robots based on passive-dynamic walkers. Science, 307 (5712): 1082-1085.

Coumans E. & Bai Y. (2016). PyBullet, a Python module for physics simulation for games, robotics and machine learning. https://pybullet.org, 2016-2019.

Cully A., Clune J., Tarapore D. & Mouret J-B. (2015). Robots that can adapt like animals. Nature 521: 503-507.

De Greef J. & Nolfi S. (2010). Evolution of implicit and explicit communication in a group of mobile robots. In S. Nolfi & M. Mirolli (Eds.), Evolution of Communication and Language in Embodied Agents. Berlin: Springer Verlag.

De Jong E.D. (2005). The MaxSolve algorithm for coevolution. In H.G. Beyer et al. (Eds.), GECCO 2005: Proceedings of the 2005 Conference on Genetic and Evolutionary Computation. New York: ACM Press.

Dhariwal P., Hesse C., Plappert M., Radford A., Schulman J., Sidor S. & Wu Y. (2017). OpenAI Baselines. https://github.com/openai/baselines.

Dillmann R. (2004). Teaching and learning of robot tasks via observation of human performance. Robotics and Autonomous Systems, (47) 2-3: 109-116.

Dorigo M., Birattari M. & Brambilla M. (2014). Swarm robotics. Scholarpedia 9 (1): 1463.

Duarte M., Costa V., Gomes J., Rodrigues T., Silva F., Oliveira S.M. et al. (2016). Evolution of collective behaviors for a real swarm of aquatic surface robots. PLoS ONE 11(3): e0151834. https://doi.org/10.1371/journal.pone.0151834

Ferrante E., Turgut A.E., Duenez-Guzman E., Dorigo M. & Wenseleers T. (2015). Evolution of self-organized task specialization in robot swarms. PLoS Computational Biology, 11(8): e1004273.

Floreano D. & Nolfi S. (1997). Adaptive behavior in competing co-evolving species. In P. Husbands & I. Harvey (Eds.), Proceedings of the Fourth European Conference on Artificial Life. Cambridge, MA: MIT Press, pp. 378-387.

Floreano D. & Urzelai J. (2000). Evolutionary robots with online self-organization and behavioral fitness. Neural Networks, 13: 431-443.

Floreano D., Dürr P. & Mattiussi C. (2008). Neuroevolution: from architectures to learning. Evolutionary Intelligence, 1 (1): 47-62.

Floreano D., Mitri S., Magnenat A. & Keller L. (2007) Evolutionary conditions for the emergence of communication in robots. Current Biology 17:514-519.

Floreano D., Nolfi S. & Mondada. F. (1998). Competitive Co-Evolutionary Robotics: From Theory to Practice. In R. Pfeifer, B. Blumberg, J-A. Meyer, S.W. Wilson (Eds.), From Animals to Animats V, Cambridge, MA: MIT Press, pp 512-524.

Freeman D., Ha D. & Metz L. (2019). Learning to predict without looking ahead: World models without forward prediction. In Advances in Neural Information Processing Systems (pp. 5379-5390).

Fukushima K. (1979). Neural network model for a mechanism of pattern recognition unaffected by shift in position - Neocognitron. Trans. IECE, J62-A(10): 658–665.

Gauci J. & Stanley K.O. (2010). Autonomous evolution of topographic regularities in artificial neural networks. Neural Computation, 22: 1860–1898.

Gers F.A. & Schmidhuber, J. (2001). LSTM recurrent networks learn simple context free and context sensitive languages. IEEE Transactions on Neural Networks, 12(6):1333–1340.

Gibson J. (1979). The Ecological Approach to Visual Perception. Boston: Houghton-Mifflin.

Gilbert S.J. (2015). Strategic offloading of delayed intentions into the external environment. The Quarterly Journal of Experimental Psychology, 68 (5): 971-992.

Glorot X. & Bengio Y. (2010). Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics (pp. 249-256).

Gomez F. & Miikkulainen R. (1997). Incremental evolution of complex general behavior. Adaptive Behavior, 5(3-4): 317-342.

Grey Walter W. (1953). The Living Brain. London: G. Duckworth; New York: W.W. Norton.

Ha D. & Schmidhuber J. (2018). World models. arXiv:1803.10122.

Hansen N. & Ostermeier A. (2001). Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation, (9) 2: 159–195.

Harnad S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1-3): 335-346.

Harvey I., Husbands P. & Cliff D. (1994). Seeing the light: Artificial evolution, real vision. From Animals to Animats, 3, 392-401.

Helbing D. (1991). A mathematical model for the behavior of pedestrians. Behavioral Science, 36: 298-310.

Helbing D. & Molnár P. (1995). Social force model for pedestrian dynamics. Physical Review E, 51: 4282-4286.

Helbing D., Buzna L., Johansson A. & Werner T. (2005). Self-organized pedestrian crowd dynamics: Experiments, simulations, and design solutions. Transportation Science, 39 (1):1-24.

Hill A., Raffin A., Ernestus M., Gleave A., Kanervisto A., Traore R., Dhariwal P., Hesse C.,  Klimov O., Nichol A., Plappert M., Radford A.,  Schulman J., Sidor S. and Wu Y. (2018). Stable baselines. https://github.com/hill-a/stable-baselines.

Hochreiter S. (1991). Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Institut fuer Informatik, Lehrstuhl Prof. Brauer, Tech. Univ. Munich.

Hochreiter S. and Schmidhuber J. (1997). Long Short-Term Memory. Neural Computation, 9(8):1735–1780. Based on TR FKI-207-95, TUM (1995).

Holland J.H. (1975). Adaptation in Natural and Artificial Systems. Ann Arbor, MI: University of Michigan Press.

Holland J.H. (1992). Adaptation in Natural and Artificial Systems, 2nd edition. Cambridge, MA: MIT Press.

Hu W., Turk G. & Liu C.K. (2018). Learning symmetric and low-energy locomotion. ACM Transactions on Graphics, 37 (4): 144-156.

Husbands P., Smith T., Jakobi N. & O'Shea M. (1998). Better living through chemistry: Evolving GasNets for robot control. Connection Science, 10 (4): 185-210.

Iizuka H. & Ikegami T. (2004). Adaptability and diversity in simulated turn-taking behavior. Artificial Life, 10(4): 361-378.

Ishida T. (2004). Development of a small biped entertainment robot QRIO. In Micro-Nanomechatronics and Human Science, 2004 and The Fourth Symposium Micro-Nanomechatronics for Information-Based Society, (pp. 23-28). IEEE Press.

Jakobi N. (1997). Evolutionary robotics and the radical envelope-of-noise hypothesis. Adaptive Behavior, 6 (2): 325-368.

Joachimczak M., Suzuki R. & Arita T. (2016). Artificial metamorphosis: evolutionary design of transforming, soft-bodied robots. Artificial Life, 22 (3): 271-298.

Kamimura A., Kurokawa H., Yoshida E., Murata S., Tomita K. & Kokaji S. (2005). Automatic locomotion design and experiments for a modular robotic system. IEEE/ASME Transactions on mechatronics, 10(3), 314-325.

Kempka M., Wydmuch M., Runc G., Toczek, J. & Jaśkowski W. (2016). Vizdoom: A doom-based ai research platform for visual reinforcement learning. In 2016 IEEE Conference on Computational Intelligence and Games (CIG) (pp. 1-8). IEEE Press.

Kingma D.P. & Ba J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Klimov O. (2016). CarRacing-v0. URL: https://gym.openai.com/envs/CarRacing-v0/.

Kober J. & Peters J. (2011). Policy search for motor primitives in robotics. Machine Learning, 84 (1-2): 171-203.

Kormushev P., Calinon S. & Caldwell D. (2010). Robot motor skill coordination with EM-based reinforcement learning. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

Krichmar J.L., Seth A.K., Nitz D.A., Fleischer J.G. & Edelman G.M. (2005). Spatial navigation and causal analysis in a brain-based device modeling cortical-hippocampal interactions. Neuroinformatics, 5: 197-222.

Kriegman S., Blackiston D., Levin M. & Bongard J. (2020). A scalable pipeline for designing reconfigurable organisms. Proceedings of the National Academy of Sciences, 117: 1853-1859.

Lange S., Riedmiller M. & Voigtländer A. (2012). Autonomous reinforcement learning on raw visual input data in a real world application. In The 2012 international joint conference on neural networks (IJCNN) (pp. 1-8). IEEE.

Lehman, J., & Stanley, K. O. (2011). Abandoning objectives: Evolution through the search for novelty alone. Evolutionary computation, 19(2), 189-223.

Levine S., Pastor P., Krizhevsky A., Ibarz J. & Quillen D. (2017). Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International Journal of Robotics Research, 37 (4-5): 421-436.

Lillicrap T.P., Hunt J.J., Pritzel A. et al. (2015). Continuous control with deep reinforcement learning. arXiv:1509.02971.

Lipson H. & Pollack J.B. (2000) Automatic design and manufacture of robotic lifeforms. Nature 406 (6799): 974-978

Lorenz E.N. (1963). Deterministic nonperiodic flow. Journal of Atmospheric Sciences, (20): 130-141.

Luck K.S., Amor H.B. & Calandra R. (2019). Data-efficient co-adaptation of morphology and behaviour with deep reinforcement learning. arXiv:1911.06832v1

Massera G., Ferrauto T., Gigliotta O. & Nolfi S. (2013). FARSA: An open software tool for embodied cognitive science. In P. Liò, O. Miglino, G. Nicosia, S. Nolfi & M. Pavone (Eds.), Proceedings of the 12th European Conference on Artificial Life. Cambridge, MA: MIT Press.

Massera G., Ferrauto T., Gigliotta O. & Nolfi S. (2014). Designing adaptive humanoid robots through the FARSA open-source framework. Adaptive Behavior, 22 (3): 255-265.

Massera G., Tuci E., Ferrauto T. & Nolfi S. (2010). The facilitatory role of linguistic instructions on developing manipulation skills. IEEE Computational Intelligence Magazine, (5) 3: 33-42.

Mattner J., Lange S. & Riedmiller M. (2012). Learn to swing up and balance a real pole based on raw visual input data. In Proceedings of the 19th International Conference on Neural Information Processing (ICONIP 2012), vol. 5, pp. 126-133. Doha, Qatar.

May R.M. (1976). Simple mathematical models with very complicated dynamics. Nature, 261 (5560): 459-467.

McCulloch W. & Pitts W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5:115-133.

McGeer T. (1990). Passive dynamic walking. International Journal of Robotics Research 9 (2): 62–82.

Metta G., Sandini G., Vernon D., Natale L. & Nori F. (2008). The iCub humanoid robot: an open platform for research in embodied cognition. In Proceedings of the 8th workshop on performance metrics for intelligent systems, pp. 50-56.

Miconi T. (2008). Evolution and complexity: The double-edged sword. Artificial Life (14) 3: 325-334.

Miconi, T. (2016). Learning to learn with backpropagation of Hebbian plasticity. arXiv preprint arXiv:1609.02228.

Miglino O., Lund H.H. & Nolfi S. (1995). Evolving mobile robots in simulated and real environments. Artificial Life, (2) 4: 417-434.

Milano N. & Nolfi S. (2020). Autonomous Learning of Features for Control: Experiments with Embodied and Situated Agents. arXiv preprint arXiv:2009.07132.

Milano N. & Nolfi S. (2021). Automated curriculum learning for embodied agents: A neuroevolutionary approach. arXiv preprint arXiv:2102.08849.

Mnih V., Badia A.P., Mirza M. et al. (2016). Asynchronous methods for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA.

Mirolli M. & Parisi D. (2008). How producer biases can favor the evolution of communication: An analysis of evolutionary dynamics. Adaptive Behavior, 16: 27-52.

Mitri S., Floreano D. & Keller L. (2009). The evolution of information suppression in communicating robots with conflicting interests. Proceedings of the National Academy of Sciences, 106: 15786-15790.

Mnih V., Kavukcuoglu K., Silver D., Rusu A. A., Veness J., Bellemare M. G., Graves A., Riedmiller M., Fidjeland A. K., Ostrovski G. et al.  (2015). Human-level control through deep reinforcement learning. Nature, 518: 529–533.

Mondada F., Bonani M., Raemy X., Pugh J., Cianci C., Klaptocz A., ... & Martinoli A. (2009). The e-puck, a robot designed for education in engineering. In Proceedings of the 9th Conference on Autonomous Robot Systems and Competitions (Vol. 1, pp. 59-65). IPCB: Instituto Politécnico de Castelo Branco.

Mondada F., Franzi E. & Ienne P. (1993). Mobile robot miniaturization: A tool for investigation in control algorithms. In T. Yoshikawa & F. Miyazaki (Eds.), Proceedings of the Third International Symposium on Experimental Robotics. Kyoto, Japan.

Mouret J-B. & Doncieux S. (2009). Overcoming the bootstrap problem in evolutionary robotics using behavioral diversity. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC-2009). Washington, DC: IEEE Press.

Nilsson N.J. (1984). Shakey the Robot. Technical Note No. 323. Menlo Park, CA: SRI International. This is a collection of papers and technical notes, some previously unpublished, from the late 1960s and early 1970s.

Nishimoto R., & Tani J. (2009). Development of hierarchical structures for actions and motor imagery: a constructivist view from synthetic neuro-robotics study. Psychological Research, 73, 545-558.

Nolfi S. (1996). Adaptation as a more powerful tool than decomposition and integration. In T. Fogarty & G. Venturini (Eds.), Proceedings of the Workshop on Evolutionary Computing and Machine Learning, 13th International Conference on Machine Learning, University of Bari, Italy.

Nolfi S. (2000). Evorobot 1.1 user manual. Technical Report, Roma, Italy: Institute of Psychology, National Research Council. 

Nolfi S. (2005). Categories formation in self-organizing embodied agents. In H. Cohen & C. Lefebvre (Eds), Handbook of Categorization in Cognitive Science. Oxford, UK: Elsevier.

Nolfi S. (2009). Behavior and cognition as a complex adaptive system: Insights from robotic experiments. In C. Hooker (Ed.), Handbook of the Philosophy of Science, Volume 10: Philosophy of Complex Systems (general editors: Dov M. Gabbay, Paul Thagard & John Woods). Elsevier.

Nolfi S. & Floreano D. (1998). Co-evolving predator and prey robots: Do 'arms races' arise in artificial evolution? Artificial Life, 4 (4): 311-335.

Nolfi S. & Floreano D. (1999). Learning and evolution. Autonomous Robots, 7 (1): 89-113.

Nolfi S. & Floreano D. (2000). Evolutionary Robotics: The Biology, Intelligence, and Technology of Self-Organizing Machines. Cambridge, MA: MIT Press/Bradford Books.

Nolfi S. & Mirolli M. (2010). Evolution of Communication and Language in Embodied Agents. Berlin: Springer Verlag.

Nolfi S. & Gigliotta O. (2010). Evorobot*: A tool for running experiments on the evolution of communication. In S. Nolfi & M. Mirolli (Eds.), Evolution of Communication and Language in Embodied Agents. Berlin: Springer Verlag.

Nolfi S., Bongard J., Husbands P. & Floreano D. (2016). Evolutionary robotics. In B. Siciliano & O. Khatib (Eds.), Handbook of Robotics, II Edition. Berlin: Springer Verlag.

Omer A.M.M., Ghorbani R., Lim H. & Takanishi A. (2009). Semi-passive dynamic walking for biped walking robot using controllable joint stiffness based on dynamic simulation. In Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics. Singapore: IEEE Press.

Gigliotta O., Pezzulo G. & Nolfi S. (2011). Evolution of a predictive internal model in an embodied and situated agent. Theory in Biosciences, 130 (4): 259-276.

Oudeyer P-Y., Kaplan F. & Hafner V.F. (2007). Intrinsic motivation systems for autonomous mental development. IEEE Transactions on Evolutionary Computation, 11:265–286.

Pagliuca P. & Nolfi S. (2019). Robust optimization through neuroevolution. PLoS ONE 14 (3): e0213193.

Pagliuca P. & Nolfi S. (2020). The dynamic of body and brain co-evolution. arXiv preprint arXiv:2011.11440.

Pagliuca P., Milano N. & Nolfi S. (2018). Maximizing adaptive power in neuroevolution. PLoS One, 13(7): e0198788.

Pagliuca P., Milano N. & Nolfi S. (2019). Efficacy of modern neuro-evolutionary strategies for continuous control optimization. arXiv:1912.05239.

Petrosino G., Parisi D. & Nolfi S. (2013). Selective attention enables action selection: evidence from evolutionary robotics experiments. Adaptive Behavior, 21 (5): 356-370.

Pfeifer R. & Bongard J. (2016). How The Body Shapes the Way We Think: A New View of Intelligence. Cambridge, MA: MIT Press.

Pfeifer R. & Scheier C. (1999). Understanding Intelligence. Cambridge, MA: MIT Press.

Pfeifer R., Iida F & Gómez G. (2006). Morphological computation for adaptive behavior and cognition. International Congress Series, 1291: 22-29. Berlin, Germany: Springer Verlag.

Philippides A.O., Husbands P., Smith T. & O'Shea M. (2005). Flexible coupling: Diffusing neuromodulators and adaptive robotics. Artificial Life, 11 (1-2): 139-160.

Pugh J.K., Soros L.B. & Stanley K.O. (2016). Quality diversity: A new frontier for evolutionary computation. Frontiers in Robotics and AI, 3, 40.

Quigley M., Conley K., Gerkey B., Faust J., Foote,T., Leibs J., ... & Ng A.Y. (2009). ROS: an open-source Robot Operating System. In ICRA workshop on open source software (Vol. 3, No. 3.2, p. 5).

Raffin A. (2018). RL Baselines Zoo. GitHub repository. https://github.com/araffin/rl-baselines-zoo

Rawat W. & Wang Z. (2017). Deep convolutional neural networks for image classification: A comprehensive review. Neural Computation, 29 (9): 2352-2449.

Real E., Moore S., Selle A., Saxena S., Suematsu Y.L., Tan J., Le Q.V. & Kurakin A. (2017). Large-scale evolution of image classifiers. In D. Precup & Y.W. Teh (Eds.), Proceedings of the 34th International Conference on Machine Learning, pp. 2902-2911.

Rechenberg I. & Eigen M. (1973). Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Stuttgart: Frommann-Holzboog.

Reid C.R., Lutz M.J., Powell S., Kao A.B., Couzin I. D. & Garnier S. (2015). Army ants dynamically adjust living bridges in response to a cost–benefit trade-off. Proceedings of the National Academy of Sciences, 112(49): 15113-15118.

Reynolds C.W. (1987). Flocks, herds and schools: A distributed behavioral model. In M.C. Stone (Ed.), Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques. New York: Association for Computing Machinery.

Rosin C.D. & Belew R.K. (1997). New methods for competitive coevolution. Evolutionary Computation, (5) 1:1-29.

Sadeghi F. & Levine S. (2017). CAD2RL: Real single-image flight without a single real image. arXiv:1611.04201v4

Salimans T., Ho J., Chen X., Sidor S. & Sutskever I. (2017). Evolution strategies as a scalable alternative to reinforcement learning. arXiv:1703.03864v2.

Sandini G., Metta G. & Vernon D. (2004). RobotCub: An open framework for research in embodied cognition. International Journal of Humanoid Robotics, (8) 2: 18-31.

Schaff C., Yunis D., Chakrabarti A. & Walter M.R. (2019). Jointly learning to construct and control agents using deep reinforcement learning. In Proceedings of the International Conference on Robotics and Automation (ICRA). Montreal, Canada: IEEE Press.

Schmidhuber J.  (1991).  Curious model-building control systems. In Proceedings of the International Joint Conference on Neural Networks. Singapore, vol. 2, pp. 1458–1463.

Schmidhuber J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61: 85-117.

Schmidhuber, J. (2010). Formal theory of creativity, fun, and intrinsic motivation (1990–2010). IEEE Transactions on Autonomous Mental Development, 2(3), 230-247.

Schulman J., Levine S., Abbeel P., Jordan M.I. & Moritz P. (2015). Trust region policy optimization. In ICML, pp. 1889–1897.

Schulman J., Wolski F., Dhariwal P., Radford A. & Klimov O. (2017). Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.

Schwefel H.P. (1981). Numerical Optimization of Computer Models. New York: Wiley.

Schwefel H.P. (1995). Evolution and Optimum Seeking. New York: Wiley.

Sehnke F., Osendorfer C., Rückstieß T., Graves A., Peters J. & Schmidhuber J. (2010). Parameter-exploring policy gradients. Neural Networks, 23 (4): 551-559.

ShadowRobot (2005). ShadowRobot Dexterous Hand. https://www.shadowrobot.com/products/dexterous-hand/.

Simione L. & Nolfi S. (2019). Long-term progress and behavior complexification in competitive co-evolution. arXiv:1909.08303.

Sims K. (1994). Evolving 3D morphology and behavior by competition. Artificial Life 1 (4): 353-372.

Skoglund A., Iliev B., Kadmiry B. & Palm R. (2007). Programming by demonstration of pick-and-place tasks for industrial manipulators using task primitives. International Symposium on Computational Intelligence in Robotics and Automation (CIRA 2007).

Skolicki Z. & De Jong K. (2004). Improving evolutionary algorithms with multi-representation island models. In Parallel Problem Solving from Nature - PPSN VIII, 8th International Conference. Berlin: Springer-Verlag.

Sperati V., Trianni V. & Nolfi S. (2011). Self-organised path formation in a swarm of robots. Swarm Intelligence, 5:97-119.

Spivey M. (2007). The Continuity of Mind. New York: Oxford University Press.

Spröwitz A., Tuleu A., Vespignani M., Ajallooeian M., Badri E. & Ijspeert A. J. (2013). Towards dynamic trot gait locomotion: Design, control, and experiments with Cheetah-cub, a compliant quadruped robot. The International Journal of Robotics Research 32(8): 932–950.

Stanley K.O. & Miikkulainen R. (2002). Evolving neural networks through augmenting topologies. Evolutionary Computation, vol. 10 (2): 99–127.

Stanley K.O., D’Ambrosio D.B. & Gauci J. (2009). A hypercube-based indirect encoding for evolving large-scale neural networks. Artificial Life, vol. 15 (2): 185–212.

Strogatz S. (2001). Nonlinear Dynamics and Chaos. Westview Press.

Sugita Y. & Tani J. (2005). Learning semantic combinatoriality from the interaction between linguistic and behavioral processes. Adaptive Behavior, (13) 1: 33–52.

Sutton R.S. & Barto A.G. (2018). Reinforcement Learning: An Introduction, 2nd Edition. Cambridge, MA: MIT Press.

Taleb N.N. (2012). Antifragile: Things that Gain from Disorder. New York: Random House.

Tani J. (2016). Exploring Robotic Minds: Actions, Symbols, and Consciousness as Self-Organizing Dynamic Phenomena. New York: Oxford University Press.

Tedrake R. (2019). Underactuated Robotics: Algorithms for Walking, Running, Swimming, Flying, and Manipulation (Course Notes for MIT 6.832). Downloaded on November 2019 from underactuated.mit.edu.

Thelen E. & Smith L.B. (1994).  A Dynamic Systems Approach to the Development of Cognition and Action. Cambridge, MA: MIT Press.

Todorov E., Erez T. & Tassa Y. (2012). MuJoCo: A physics engine for model-based control. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5026-5033. IEEE Press.

Tuci E., Ferrauto T., Zeschel A., Massera G. & Nolfi S. (2011). An experiment on behaviour generalisation and the emergence of linguistic compositionality in evolving robots. IEEE Transactions on Autonomous Mental Development, (3) 2: 176-189.

Tuci E., Massera G. & Nolfi S. (2010). Active categorical perception of object shapes in a simulated anthropomorphic robotic arm. IEEE Transactions on Evolutionary Computation, (14) 6: 885-899.

Ude A., Atkeson C.G. & Riley M. (2004). Programming full-body movements for humanoid robots by observation. Robotics and Autonomous Systems, (47) 2-3: 93-108.

Varela F. J., Thompson E. T. & Rosch E. (1991). The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA: MIT Press.

von Hofsten C. & Ronnqvist L. (1988). Preparation for grasping an object: A developmental study. Journal of Experimental Psychology Human Perception and Performance, 14 (4): 610-621.

Weng J., Ahuja N. & Huang T.S. (1993). Learning recognition and segmentation of 3-D objects from 2-D images. In Proceedings of the 4th International Conference on Computer Vision, pp. 121-128. Berlin, Germany.

Werbos P.J. (1990). Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10), 1550-1560.

West-Eberhard M.J. (2003). Developmental Plasticity and Evolution. New York: Oxford University Press.

Whitley D., Rana S., & Heckendorn R.B. (1997). Island model genetic algorithms and linearly separable problems. In Selected Papers from AISB Workshop on Evolutionary Computing, volume 1305 of Lecture Notes In Computer Science, pages 109-125. Springer-Verlag.

Wierstra D., Schaul T., Glasmachers T., Sun Y., Peters J. & Schmidhuber J. (2014). Natural evolution strategies. The Journal of Machine Learning Research, 15(1), 949-980.

Yamashita Y. & Tani J. (2008). Emergence of functional hierarchy in a multiple timescale neural network model: a humanoid robot experiment. PLoS Computational Biology, 4 (11): e1000220.