Hello.
Over the past few days, I've started implementing the audio engine. If you haven't read the previous posts, here they are: part 1, part 2, part 3.
I broke down the project into a few implementation phases:
1) Just make some sound as the output
2) Just make a simple virtual instrument play an output
3) Make multiple simple virtual instruments play their outputs together
4) Just make a simple effect for a virtual instrument
5) Process effects sequentially for a track and generate a single output buffer representing the track (see the sketch after this list)
6) Just play an audio file
7) Write to an audio file
8) Support multiple callback functions for different occasions
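To make the phase list a bit more concrete, here is a minimal sketch of what phases 2, 4, and 5 boil down to: a virtual instrument renders samples into a buffer, and the track's effects are then processed sequentially on that same buffer. The names here (Instrument, Effect, SineInstrument, GainEffect, renderTrack) are placeholders I'm using for illustration; the actual interfaces in the repo will likely look different.

```cpp
#include <cmath>
#include <cstddef>
#include <memory>
#include <vector>

// Hypothetical interfaces -- the engine's real class names and signatures may differ.
struct Instrument {
    virtual ~Instrument() = default;
    // Fill `buffer` with `frames` mono samples at the given sample rate.
    virtual void render(float* buffer, std::size_t frames, double sampleRate) = 0;
};

struct Effect {
    virtual ~Effect() = default;
    // Process the buffer in place.
    virtual void process(float* buffer, std::size_t frames) = 0;
};

// Phase 2: a simple virtual instrument that outputs a sine wave.
struct SineInstrument : Instrument {
    double frequency = 440.0;
    double phase = 0.0;
    void render(float* buffer, std::size_t frames, double sampleRate) override {
        constexpr double kTwoPi = 6.283185307179586;
        const double step = kTwoPi * frequency / sampleRate;
        for (std::size_t i = 0; i < frames; ++i) {
            buffer[i] = 0.2f * static_cast<float>(std::sin(phase));
            phase += step;
        }
    }
};

// Phase 4: a trivial effect that just scales the signal.
struct GainEffect : Effect {
    float gain = 0.5f;
    void process(float* buffer, std::size_t frames) override {
        for (std::size_t i = 0; i < frames; ++i) buffer[i] *= gain;
    }
};

// Phase 5: render the instrument, then run the effects sequentially,
// producing one output buffer that represents the track.
std::vector<float> renderTrack(Instrument& instrument,
                               const std::vector<std::unique_ptr<Effect>>& effects,
                               std::size_t frames, double sampleRate) {
    std::vector<float> buffer(frames, 0.0f);
    instrument.render(buffer.data(), frames, sampleRate);
    for (const auto& effect : effects) effect->process(buffer.data(), frames);
    return buffer;
}
```

Processing the effects in place on a single buffer keeps allocations out of the per-block path, which matters once this runs inside a real audio callback.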
By the end of phase 7, the engine will have its basic functionality in place. After that, I can come back to each phase and enhance it or add new features.
I'm currently at the beginning of phase 4. The UML diagram of the project that I worked on for a few weeks is helping a lot, although some of its details change as the project goes on. I'll put the diagram in the GitHub repo once I update it.
The project is pretty easy to build thanks to the CMake scripts, although I haven't checked yet whether it builds properly on other computers and operating systems (there may be dependency issues).
The GitHub repository is public and available here! It doesn't have documentation or a wiki yet; I may use Doxygen, but I still have to add comments to the code first. It does have a kanban board on GitHub, so everyone can see how the project is progressing.