| Section | Page |
|---|---|
| Foreword | 7 |
| Preface | 9 |
| Contents | 12 |
| 1 Introduction | 17 |
| 1.1 Abstract | 17 |
| 1.2 Why Robotic Musicianship | 17 |
| 1.3 Sound Production and Design—Survey | 19 |
| 1.3.1 Traditional Instruments | 20 |
| 1.3.2 Augmented and Novel Instruments | 24 |
| 1.4 Musical Intelligence | 25 |
| 1.4.1 Sensing and Perception | 26 |
| 1.4.2 Music Generation | 30 |
| 1.5 Embodiment | 32 |
| 1.6 Integrating Robotic Musicianship into New Interfaces | 34 |
| 1.6.1 Musical Companion Robots | 34 |
| 1.6.2 Wearable Robotic Musicians | 35 |
| 1.7 Discussion | 36 |
| References | 37 |
| 2 Platforms—Georgia Tech's Robotic Musicians | 41 |
| 2.1 Abstract | 41 |
| 2.2 Haile—A Robotic Percussionist | 42 |
| 2.2.1 Motivation | 42 |
| 2.2.2 Design | 42 |
| 2.3 Shimon—A Robotic Marimba Player | 47 |
| 2.3.1 Striker Design | 47 |
| 2.3.2 Mallet Motor Control | 50 |
| 2.3.3 Slider Motor Control | 53 |
| 2.3.4 Shimon's Socially Expressive Head | 56 |
| 2.4 Shimi—A Music Driven Robotic Dancing Companion | 59 |
| 2.4.1 Robotic Musical Companionship | 60 |
| 2.4.2 Design | 61 |
| 2.4.3 Software Architecture | 62 |
| 2.4.4 Core Capabilities | 63 |
| 2.5 The Robotic Drumming Prosthetic | 66 |
| 2.5.1 Motivation | 67 |
| 2.5.2 Related Work | 68 |
| 2.5.3 Platform | 69 |
| 2.5.4 Generative Physical Model for Stroke Generation | 70 |
| 2.5.5 Conclusions | 75 |
| References | 75 |
| 3 "Listen Like A Human"—Human-Informed Music Perception Models | 78 |
| 3.1 Abstract | 78 |
| 3.2 Rhythmic Analysis of Live Drumming | 79 |
| 3.2.1 Onset Detection | 79 |
| 3.2.2 Beat Detection | 79 |
| 3.2.3 Rhythmic Stability and Similarity | 80 |
| 3.2.4 User Study | 83 |
| 3.3 Tonal Music Analysis Using Symbolic Rules | 84 |
| 3.3.1 Implementation | 85 |
| 3.3.2 Evaluation | 88 |
| 3.4 Music Analysis Using Deep Neural Networks | 91 |
| 3.4.1 Deep Musical Autoencoder | 91 |
| 3.4.2 Music Reconstruction Through Selection | 94 |
| 3.5 Real-Time Audio Analysis of Prerecorded Music | 95 |
| 3.5.1 Introduction | 95 |
| 3.5.2 Previous Work | 97 |
| 3.5.3 System Design | 97 |
| 3.5.4 Live Audio Analysis | 98 |
| 3.5.5 Gesture Design | 101 |
| 3.5.6 Network Design | 105 |
| 3.5.7 User Study | 107 |
| 3.5.8 Summary | 108 |
| References | 109 |
| 4 "Play Like A Machine"—Generative Musical Models for Robots | 110 |
| 4.1 Abstract | 110 |
| 4.2 Genetic Algorithms | 111 |
| 4.2.1 Related Work | 111 |
| 4.2.2 Method | 111 |
| 4.3 Markov Processes ("Playing with the Masters") | 115 |
| 4.3.1 Related Work | 115 |
| 4.3.2 Implementation | 115 |
| 4.3.3 Summary | 119 |
| 4.4 Path Planning Driven Music Generation | 120 |
| 4.4.1 Search and Path Planning | 120 |
| 4.4.2 Musical Path Planning | 121 |
| 4.4.3 Planning | 123 |
| 4.4.4 Evaluation | 128 |
| 4.4.5 Discussion | 130 |
| 4.5 Rule Based Jazz Improvisation | 130 |
| 4.5.1 Parametrized Representations of Higher-Level Musical Semantics | 131 |
| 4.5.2 Joint Optimization | 138 |
| 4.5.3 Musical Results | 140 |
| 4.5.4 Discussion | 142 |
| 4.6 Neural Network Based Improvisation | 143 |
| 4.6.1 Introduction | 144 |
| 4.6.2 Semantic Relevance | 146 |
| 4.6.3 Concatenation Cost | 147 |
| 4.6.4 Ranking Units | 148 |
| 4.6.5 Evaluating the Model | 149 |
| 4.6.6 Discussion | 149 |
| 4.6.7 Subjective Evaluation | 150 |
| 4.6.8 Results | 150 |
| 4.6.9 An Embodied Unit Selection Process | 152 |
| 4.7 Conclusion | 154 |
| References | 155 |
| 5 "Be Social"—Embodied Human-Robot Musical Interactions | 158 |
| 5.1 Abstract | 158 |
| 5.2 Embodied Interaction with Haile | 158 |
| 5.2.1 Interaction Modes | 159 |
| 5.2.2 Leader-Follower Interaction | 161 |
| 5.2.3 Evaluation | 162 |
| 5.2.4 Data Analysis | 165 |
| 5.2.5 Results | 168 |