A history lesson by Android Authority.
To read it go here:
This is the first article in a series where developers talk about different issues regarding making applications for Android. To start with, Andreas, the developer of G-Stomper Studio, VA-Beast and G-Stomper Rhythm, will talk about latency, as this is something that concerns many music makers who use Android.
If you are not a programmer, do not be scared to read it, as it is understandable for non-programmers too!
So thank you so much Andreas for taking the time to write and share with us all!
Latency in audio applications is probably one of the most discussed and also one of the most annoying issues on the Android platform. Understanding and handling latency the right way can be a mighty jungle, especially if you’re a “normal” developer, and not a scientist.
This article is focused on output latency on Android devices, not input or round-trip latency. Hopefully someday I'll be able to write about input latency as well, but so far input and round-trip latency have not been an issue in my applications. So the term latency in this article always means output latency. Also, please forgive me if I omit some scientific details. It is neither my goal nor within my ability to write a scientific paper about latency on Android. What you read is my personal experience with the different aspects of output latency on the Android platform.
Output Latency, what is it?
In short, output latency is the time from the moment you press a button or a piano key until you hear the sound from the speakers. And, output latency in audio applications is something we all want to get rid of.
The complete output latency in a musical application, which includes live playing, is a combination of the following 3 main factors:
1. Control Input Latency (e.g. display reaction time)
2. Application Latency (everything that happens in the app layer)
3. Audio System Latency (everything that happens in the system layer)
Control Input Latency (e.g. display reaction time)
The Control Input Latency is the time from the moment you touch the screen (or an external MIDI keyboard) until the audio system gets notified by the Android OS to do something. It is influenced by various factors, which strongly depend on your device and Android version. It can vary from a few milliseconds up to 300ms or even more. The Control Input Latency is under the full control of the Android OS and the underlying hardware. There's no way to optimize or measure it from inside an app. But you can get rid of a good part of it by using a MIDI controller/keyboard. The reaction time of an external MIDI keyboard is usually around 30-40ms faster than the on-screen controls. This may surprise you, but the undisputed king regarding display reaction time is still the Google/Samsung Galaxy Nexus (2011).
Audio Output Latency (everything after the Control Input Latency)
The Audio Output Latency is the time from the moment when an application starts to play a sound until you hear it from the speakers. The Audio Output Latency is hardware, operating system and app dependent. A good part of it can be optimized from inside the app (as long as the hardware and operating system allows it). The Audio Output Latency can vary from ~35ms up to over 250ms. Sure, there are apps that report latencies down to 10ms, but this is not the complete thing (more about this later).
Application Latency (everything that happens in the app layer)
“Application Latency” is not an official term. I call it that because it happens in the main application, the audio app. It means the time from the moment when an application starts to play a sound (technically, when it starts to fill an audio buffer) until the buffer is passed (enqueued) to the underlying audio system (AudioTrack or OpenSLES). This part is under the direct control of the audio application. It depends on the defined audio system main buffer size and the app-internal buffering.
AudioTrack is the out-of-the-box system, which is guaranteed to run stably on every Android device.
It is not intended for real-time audio applications, but since it's the one and only ready-to-use system, it is used in most audio apps. AudioTrack has a device-dependent minBufferSize, which can be obtained by invoking AudioTrack.getMinBufferSize(). In short, AudioTrack has full control over the minBufferSize as well as over the way the buffers are handled (once a buffer is passed to the AudioTrack system). The lowest minBufferSize ever reported by AudioTrack comes from the Google/Samsung Galaxy Nexus (2011) and corresponds to an application latency of 39ms at a sample rate of 44100Hz. On modern non-Nexus devices, minBufferSizes corresponding to around 80ms are more likely. Using buffers smaller than the reported minBufferSize usually results in an initialization error.
The native OpenSLES system, on the other hand, allows more control. The buffer size, as well as the way the buffers are handled, is the responsibility of the app developer. OpenSLES allows smaller buffers than AudioTrack, of course only as long as a device can handle it. The smallest well-working OpenSLES buffer size in the G-Stomper environment corresponds to an application latency of 10ms on Android 5.x and 20ms on Android 4.4 (both with a Nexus 9).
The Application Latency can be calculated with a simple formula (both results in milliseconds):
applicationLatencyMs = audioTrackByteBufferSize * 1000 / sampleRateHz / bytesPerSample / numChannels
applicationLatencyMs = internalFloatBufferSize * 1000 / sampleRateHz
Take the maximum of these two values and you have the Application Latency.
On the Android platform, this value can vary from ~10ms up to ~200ms.
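To make the formula above concrete, here is a minimal Java sketch. The helper names (applicationLatencyMs and friends) are my own for illustration, not part of any Android API:

```java
// Minimal sketch of the Application Latency formula above.
// All helper names are illustrative, not part of any Android API.
class AppLatency {

    // Latency of the AudioTrack byte buffer: bytes -> frames -> milliseconds.
    static double byteBufferLatencyMs(int byteBufferSize, int sampleRateHz,
                                      int bytesPerSample, int numChannels) {
        return (double) byteBufferSize * 1000.0 / sampleRateHz / bytesPerSample / numChannels;
    }

    // Latency of the app-internal float buffer, which is sized in frames.
    static double floatBufferLatencyMs(int internalFloatBufferSize, int sampleRateHz) {
        return (double) internalFloatBufferSize * 1000.0 / sampleRateHz;
    }

    // The Application Latency is the larger of the two values.
    static double applicationLatencyMs(int byteBufferSize, int internalFloatBufferSize,
                                       int sampleRateHz, int bytesPerSample, int numChannels) {
        return Math.max(
            byteBufferLatencyMs(byteBufferSize, sampleRateHz, bytesPerSample, numChannels),
            floatBufferLatencyMs(internalFloatBufferSize, sampleRateHz));
    }

    public static void main(String[] args) {
        // Example: a 7056-byte 16-bit stereo buffer at 44100 Hz is 40 ms,
        // and an internal float buffer of 1764 frames is also 40 ms.
        System.out.println(applicationLatencyMs(7056, 1764, 44100, 2, 2)); // 40.0
    }
}
```

Note that the byte-buffer formula divides by bytesPerSample and numChannels to convert bytes into frames first; a 16-bit stereo frame is 4 bytes, which is easy to get wrong when comparing buffer sizes between devices.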
Audio System Latency (everything that happens in the system layer)
One of the biggest mistakes regarding output latency is that most apps report only the Application Latency. This of course looks nice (e.g. Nexus 7 2013/AudioTrack: 40ms), but it is only half the truth.
The moment a buffer is passed to AudioTrack, for example, actually only means that the buffer was enqueued in the AudioTrack-internal buffer queue. You never know exactly how much time will pass before the buffer actually comes out as sound from the speakers. The time from the moment a buffer is passed to the audio system until you actually hear it from the speakers is what I call the “Audio System Latency”.
The Audio System Latency comes in addition to the Application Latency and strongly depends on the audio system's internal buffer pipeline (buffer queue, resampling, D/A conversion, etc.). Regarding low latency, this is the most significant part of the latency chain, which reveals the obvious problem with AudioTrack. With AudioTrack, you don't have any control over its internal buffer pipeline, and there's no way to force a buffer through it more quickly. What you can do is prepare the buffers as finalized as possible, e.g. do the resampling in the audio application and always pass the buffers at the system's native sample rate. Unfortunately this does not change the latency, but it avoids glitches due to Android-internal resampling.
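As an illustration of doing the resampling in the app layer, here is a deliberately naive linear-interpolation resampler in Java. This is a sketch of the idea only; a production app would use a band-limited (e.g. polyphase) resampler to avoid aliasing:

```java
// Naive linear-interpolation resampler, illustrating "resample in the app,
// pass buffers at the system's native rate". Illustrative only; a real app
// should use a band-limited resampler to avoid aliasing artifacts.
class Resampler {

    // Converts a mono float buffer from srcRateHz to dstRateHz.
    static float[] resampleLinear(float[] in, int srcRateHz, int dstRateHz) {
        int outLen = (int) ((long) in.length * dstRateHz / srcRateHz);
        float[] out = new float[outLen];
        for (int i = 0; i < outLen; i++) {
            double srcPos = (double) i * srcRateHz / dstRateHz; // position in input
            int i0 = (int) srcPos;
            int i1 = Math.min(i0 + 1, in.length - 1);
            double frac = srcPos - i0;
            out[i] = (float) ((1.0 - frac) * in[i0] + frac * in[i1]);
        }
        return out;
    }

    public static void main(String[] args) {
        // Upsample four samples from 22050 Hz to 44100 Hz: twice as many samples,
        // with interpolated values in between.
        float[] up = resampleLinear(new float[] {0f, 1f, 0f, -1f}, 22050, 44100);
        System.out.println(up.length); // 8
        System.out.println(up[1]);     // 0.5 (interpolated between 0 and 1)
    }
}
```

On API level 17 and higher, the native output sample rate can be queried via AudioManager.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE), so the app knows which rate to target.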
I’ve measured Audio System Latencies of more than twice the Application Latency. In other words, if the Application Latency is 80ms, the full output latency can easily exceed 240ms, which is ridiculous for a real-time application.
What did Samsung do in their Professional Audio SDK to achieve such low latencies?
I’m no scientist, but it’s quite obvious that they reduced the audio pipeline (application to speaker) to a minimum, and they did a very good job with impressive results. Unfortunately the SDK is for Samsung devices only, but it’s for sure a great pioneer work, and maybe it’ll motivate others to catch up. There’s a nice video presentation of the Samsung Professional Audio SDK on YouTube: https://www.youtube.com/watch?v=7r455edqQFM
For (supported) Samsung devices, it’s definitely a good thing to consider the integration of their SDK.
What can you do as an app developer to get a faster audio pipeline?
Go native! Using the native OpenSLES reduces the Audio System Latency significantly. Even if you work with the same buffer size as with AudioTrack, you’ll notice a big difference, especially on newer Android versions.
Using OpenSLES does not implicitly mean “low latency”, but it definitely allows lower latencies than AudioTrack, because all audio buffers are written directly to the audio hardware, without the AudioTrack API and Dalvik/ART runtime overhead. This means the audio pipeline is shorter and therefore faster.
“The Audio Programming Blog” provides good tutorials regarding the OpenSLES integration:
Also helpful is this article on GitHub:
The “Google I/O 2013 - High Performance Audio” presentation gives a good overview about low latency audio on Android in general.
Will the G-Stomper apps get OpenSLES support?
Yes, definitely. Actually, OpenSLES is already integrated as an experimental additional audio system. In the current version 4.0.4, it is exclusively available for Nexus devices. The upcoming version 4.0.5 will introduce OpenSLES to all 1GHz quad-core (or faster) devices running Android 4.2 (or higher). The default will still be AudioTrack, but users with supported devices will get a notification and will be able to manually switch to OpenSLES in the G-Stomper setup (Setup dialog / Audio / Audio System / OpenSL).
How can the full output latency be measured?
Unfortunately there’s no proper way to automatically calculate the full output latency (Control Input Latency + Application Latency + Audio System Latency) from inside an app. The only way to get real numbers is to measure it.
There’s an article on android.com which shows a way to measure the full audio output latency (Application Latency + Audio System Latency) using an oscilloscope and the device’s LED indicator:
But honestly, not everyone has that equipment.
Here’s a simple way to measure full output latency:
The only things you need are a microphone, a PC with a graphical audio editor installed, and an Android device. While recording on the PC, hold the microphone close to the screen and tap some button or piano key on the Android screen that is supposed to play a sound. Be sure to tap the screen hard enough that the tap is audible. The microphone will record both the physical finger tap and the audio output. Then, in the audio editor on the PC, measure the gap between the two peaks (finger tap and audio output).
Be sure to make more than one recording and take the average of the measured times. Especially the display reaction time may vary over multiple taps.
There might also be a quite significant difference between the headphone jack (with an external speaker connected) and the device's internal speakers. Using the internal speakers may result in higher latencies because of the post-processing that is usually done for internal (low-quality) speakers.
This is definitely not the most scientific and also not the most precise approach, but it’s precise enough to give you an idea of the real output latency. You’ll be surprised by the results.
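If you export such a recording as raw samples, the peak-to-peak gap can even be found programmatically. Here is a rough Java sketch, assuming a clean recording with two clear transients (the tap and the audio output); the threshold and minimum-gap values are guesses you would normally tune by eye in the editor:

```java
// Rough sketch: find the finger-tap peak and the audio-output peak in a
// recorded mono buffer and return the gap in milliseconds.
// Assumes a clean recording with exactly two clear transients.
class LatencyMeter {

    // Finds the first sample at or above 'threshold' (the tap), then the next
    // such sample at least 'minGapSamples' later (the sound), and returns the
    // gap in milliseconds. Returns -1 if two peaks are not found.
    static double measureGapMs(float[] recording, int sampleRateHz,
                               float threshold, int minGapSamples) {
        int tap = -1;
        for (int i = 0; i < recording.length; i++) {
            if (Math.abs(recording[i]) >= threshold) {
                if (tap < 0) {
                    tap = i; // first peak: the physical finger tap
                } else if (i - tap >= minGapSamples) {
                    return (i - tap) * 1000.0 / sampleRateHz; // second peak: the sound
                }
            }
        }
        return -1.0;
    }

    public static void main(String[] args) {
        // Synthetic 1-second recording at 44100 Hz:
        // tap at sample 441, audio output at sample 4410.
        float[] rec = new float[44100];
        rec[441] = 1.0f;
        rec[4410] = 1.0f;
        System.out.println(measureGapMs(rec, 44100, 0.5f, 1000)); // 90.0 (ms)
    }
}
```

This automates only the last step of the method above; you still need the microphone recording, and you should still average several measurements.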
This has been in the vaults for about two weeks or so, as I wanted it to be posted unblemished by other news etc. and hang out on the opening page for some days, as it is an album that to my ears is very good. I wrote about the release before and asked the artist to write some words about it, as it was made entirely in Caustic 3.
It is a dark and ambient work, but unlike much other similar fare it does not get on my nerves with endless meanderings into meaningless sound exercises or self-important bombastic reflections on the dark side of existence.
In any case, it is free and can be downloaded from the Internet Archive (which is a good idea for anyone who does not want to compromise their releases with the stigma of supporting commercial enterprises and all that entails...). It comes in all the formats you could desire, and it is free.
Here is the article as written by D.N.P:
I never was much one for complex melodies; they reminded me of bloated 1970s rock and the worst kind of prog. I always preferred atmospheres and repetition: why skim over notes when you can let them breathe? My first exposure to dark music outside of metal and its doomy subgenres came in the form of the album 'Nostromo' by Sleep Research Facility.
I've never heard anything quite like it and it really did trigger off something inside me.
On entering the world of Android, I stumbled upon 'Caustic 2' and immediately started to experiment. Since the huge improvements made in the Caustic 3 upgrade, creating 'dark ambient' is clearly possible, even though the app is maybe more suited to dance/techno styles. By stretching the bars to 8x and turning the project tempo right down, you can allow the desired sounds to develop.
The PCM Synth has terrific versatility and by downtuning samples and cranking the attack, sustain and release knobs you can create unexpected noises and form a seamless drone that can pulsate, flow and ebb. Also you have to love how complex the Subsynth is too. I'm particularly keen on very slow, very deep sine wave oscillations that the subsynth allows.
Taking full advantage of the automation facilities, tweaking the filters can add new dimensions to the overall piece.
The modular synth unit is a complete mixed bag for me; I always have a fumble through it and often end up with some very unexpected results. You have to have some surprises! The FM synth can be amazing; just clicking a button or changing the sequence of the synths can change one sound into another. Again, experimentation is key to success with this beauty.
You can create some very ethereal feels using the Organ unit although I'm yet to fully take advantage of this....and I've not even tried the KS synth yet.
There's so much that lends Caustic 3 to dark ambient noise music....even the C-SFXR 'Easter egg' can create an eerie dark sound when downtuned and fed through the numerous effects available.
As for sound packs, I own pretty much all of them and have recorded quite a few samples of my own. You can't have enough 'raw material' at your disposal and the Single Cell Software site is continually updated with new presets.
I believe it would be nigh on impossible to find a better app than Caustic 3 to create sounds to scare your neighbours and close family. Go on, go horror....
Here is a review published:
To download the album:
Hey developers read the article and get some ideas how to help out with this great project!
To read the rest of the article go here:
A very positive overview of the NotateMe app that can be read here:
Article talking about different trackers and their respective strengths and weaknesses.
The last part dedicated to SunVox gets the most love of them all!
Here is a little tidbit that got picked up from the article:
Sean Booth of Autechre openly admits to using and enjoying Sunvox, it’s time to take a step back and seriously consider what’s truly worth using.
Of the ones I can remember, I have also seen Richard Devine praising SunVox and using it, and that was many years ago, when it did not have the capabilities it has now.
It is a worthwhile read, especially if you have never tried SunVox, as it will surely pique your curiosity. If you already use SunVox, for sure there will be a lot of nodding your head in agreement!
To read the article:
If you read the information that is out there on the ANS synthesizer this will not bring anything new to the table. But it is cool that they talk about it:
Plus, they mention some other, earlier experiments with sound from images used in film; it has been my intention to write about that subject for some time, but...
blame the summer and lack of focus!
If you want to read more, with good links to more information and discussion of the Virtual ANS app, please go here:
Play Store link:
Here is an "article" from the Creators Project website regarding the app Geometric Music:
Play Store link:
Good, clear and to the point on how you can improve your low end if your music uses a lot of bass frequencies:
This is a nice article / post from one of the oldest (or the oldest) websites still going strong regarding mobile music making, on how he sees Android and its musical possibilities.
One interesting thing is that he does not compare it with iOS but talks about Android on its own terms.
To read it go here:
and extra props for recommending Musical Android!!!
This is a semi-interesting article about filmmaking with a smartphone.
I think that, with the improving quality of the cameras, it is definitely possible to make some interesting videos to show over the internet, using just a smartphone, to demonstrate your music. The only problem I see with many smartphones is quick movements, as the phones are not fast enough; to sidestep this, plan a lot of shots where the camera does not have to be moved.
Making a small video, even if it is really simple, gets your music out there, as YouTube / Vimeo etc. are a good way to reach more listeners.
Do not dismiss it!
Here is a small assortment of news.
First up: a free drum machine will be released in about two or three weeks.
As G-Stomper Beat Studio has been expanding drastically over the last year to become much more than a drum machine / groovebox, Andreas, the developer, has decided to give away a freebie: a simpler application focused on just making drum beats. It will in essence be a scaled-down G-Stomper Beat Studio for people who just want to focus on drums/beats, and not so much on synthesizing and doing whole tracks.
It will come with the basic drum sample sets of G-Stomper Beat Studio, with the option to get more add-on packs if you need them. It also comes with a basic sequencer and all the other effects and groovebox implements. It is great news for people looking for a drum machine, but people who feel the full version is too hard to grasp can also use this version to get into things before upgrading.
So I have written G-Stomper Beat Studio many times above, and that is for a reason: it is probably for the last time, as it will now be renamed to G-Stomper Studio. Makes sense, as it is much more than a drum machine / groovebox now.
Lastly, two more sample packs to go with G-Stomper Studio (ahh, you see, the first time writing G-Stomper Studio; so many fewer letters, which makes things much easier for a lazy writer like me), this time some deep house drums...
To hear sound samples go here:
and as a bonus this Hip Hop poetry with some G-Stomper Studio music in the background.
So D.N.P uses Android applications for live playing, and he agreed to write a bit about his experience. It is great because he uses a lot of different apps connected to a mixer, and at the end of the post you can listen to his live set as rehearsed.
So please read on to see what D.N.P has to say.
The Live 'laptop' electronic experience is a funny affair.
The audience can never see exactly what's going on and often they aren't sure if you're doing anything at all. I'm sure in the past I've witnessed such acts who've just pressed 'play'.
In my own experiences of constructing a 'live' set I've attempted to debunk the 'he's just pressing play' audience mindset, and I've discovered that by using multiple Android devices you can do just that. That's the first problem: multiple devices. You can only run one app at a time. Fortunately I'm a hoarder and so have 2 old smartphones as well as my current one. My main unit is a Nexus 7 tablet. Wire all 4 to a mini mixing desk along with an iPod to play bits of found sounds and speech, and away you go.
The crux of my sounds are built in Caustic 3.
A truly wonderful and powerful app and the amount of virtual knob twiddling that can be achieved is endless. Great for a live situation. Alongside Caustic, Virtual ANS is fab for even photographing the audience or the setup and using those images to generate sounds. Saucillator, Yellofier, 8-bit Buckaroo, Ethereal dialpad, Pixelwave, Ambient synth pad, White noise generator, and my new favourite Psycho Flute can all be used on the fly with remarkable results. Shame I can't get Moon Synth to work on any of my devices. I like the look of that one.
Pixitracker, 1-bit Pixitracker, Reactable, Nanoloop and Sunvox are all on my to do list in future performances as well as a heavily processed electric ukulele.
Rehearsing is very important, as in any other musical discipline. A full knowledge of how your apps work and what they can achieve is of the highest importance. Floundering in front of a paying audience does not look good.
Anyway, I record all rehearsals and listen back to them to see what worked, what didn't, etc.
You often can't tell when you're 'in the moment'. I've put my last rehearsal on bandcamp for others to hear. With warts and all.
A couple of overloaded mixing desk situations at about 15 and 40 mins in for a few seconds. But you learn from making these mistakes.
Take a listen and get out of your bedrooms and onto a stage.
You're musicians after all.
D.N.P Live - Obsidian Ambient Terror
A small review of the show:
D.N.P Bandcamp site:
Please contact me at firstname.lastname@example.org if you are using Android applications, partly or exclusively, when playing live, as it is highly interesting and a nice thing to share with others who want to take the step out of the bedroom and onto a stage.
Article from Digital Music News that can be good if you are new to Soundcloud or just to check out and see if there is something that you should be aware about.
A good article with links to further articles and some videos, which makes it a must-read for anyone interested in Virtual ANS.
I recommend going through the linked articles, as they are interesting and will shed some light on aspects that might pass you by.
Myself, I was still stuck in ANS land yesterday evening and today, working on a sampled piano instrument using Virtual ANS.
IT JUST GETS DEEEEEEEEEEEEEEEEEEPEEEEEEEEEEEEEEEEER.
Link to the article:
Homepage and the free PC / Linux / OSX version:
I am proud to present this article written by the developer regarding his application Syntheogen which is the latest of the greatest.
As this is an application that is still in BETA, it is interesting to read where it comes from, where it is going, and what was going on in his mind while developing Syntheogen.
This article will be interesting to a lot of people for different reasons, and I am very grateful that Jeremy took the time to write this for all of us to read...
(He still calls it BETA, but I have not encountered any bugs, and I think it is more a question of how and what extra functions will be added...)
Syntheogen Article for Musical Android
Jeremy Neal Kelly
Syntheogen ultimately was inspired by a second-hand drum machine I bought some twenty years ago. The machine was a Yamaha RY-10; it had a row of sixteen tiny buttons in the middle, each with a red LED above, and it was the first step sequencer I had used.
Though I love hearing music, I've rarely enjoyed making it; practice is a bore, synthesizer interfaces are maddening, and I always seem to lack the one cable I need to record my amazing riff. Step sequencers are the exception to that rule. They are fun, easy, and immediate; the step sequencer is the only interface I've seen where pressing buttons at random is actually a great idea.
They do have limitations, however. The hardware sequencers I've used offer only one row of buttons, so you cannot view or edit multiple tracks at once. Work on the 'vertical' axis — whether pitch, volume, or whatever — is neither convenient nor enjoyable, especially when bending notes. Worst of all, traditional step sequencers perform poorly outside of quadruple time, since you cannot use odd-numbered meters without losing some of your buttons. I started to make a dub track on my EMX-1, but I had to give up because triplets were such a pain.
Syntheogen is an attempt to escape those limitations. Despite the popularity of skeuomorphism, I say that software can and should transcend the limitations of hardware. In software, we can have as many buttons as we want, and those buttons can even move or change shape. At the beginning of this project, I knew I wanted a two-dimensional array of steps that would allow all tracks to be viewed and edited together. Pitched tracks would occupy multiple rows, allowing melodies and chords to be entered straightforwardly. I wanted an easy way to bend notes or entire chords. I wanted a way to divide the grid into different lengths, so that triplets and unusual time signatures could be used. I also wanted to have patterns with different lengths in the same grid, so that polyrhythms could be programmed easily. These are all things I had tried to do with hardware sequencers, but found to be difficult or impossible.
The result of all this was the Syntheogen LOOP STEPS dialog:
Where sequencing is concerned, Syntheogen offers some advanced features that (as far as I know) other Android apps lack; on the other hand, certain 'standard' features like controller automation aren't implemented at all. In some cases, I haven't had time to develop what I want, but so far I've excluded automation intentionally for reasons that relate to my design philosophy for this project.
Generally, I don't like the way automation is implemented in the applications and hardware I've used. The typical approach — where the user places the device in an automation record mode, then manipulates the control in real time — does not satisfy me. Usually there is no way to view the data without playing back and watching the controller, and no way to edit it without recording again. This design hides automation data the same way one-line step sequencers hide pitch and volume data, and that's not what Syntheogen is about.
Unfortunately, I haven't found a better way to implement this. In my work developing 'line of business' software, elegance is not required or typically even noticed. If a feature is requested, I must implement it, one way or another, even if the result is a bit awkward.
But this is a different type of project. I'm not a professional musician, and I don't expect Syntheogen to be used by many who are. My users and I make music for fun, so I think Syntheogen itself should be fun. Therefore, in this app, I would rather omit a feature if I cannot implement it in a fun and direct way. Not everything in Syntheogen meets this standard right now, but that is my goal. This approach will limit the app, in a sense, but I would rather do a few things well than do many things poorly. So, if I find a good way to implement automation, or any other feature, I will add it; otherwise, I plan to stick to the things I can do well.
Syntheogen was written in C++, which I know and like better than any other language. C++ supports object-oriented development, which is important for larger projects, yet it lets the developer work very close to the machine when performance is important. It also supports a resource-management strategy called RAII (Resource Acquisition Is Initialization) that is better and more flexible than the garbage collection offered by Java.
Unfortunately, Android is very much a Java platform, and does not give first-class status to C++ apps. Parts of the platform are represented in the Android NDK (Native Development Kit) with 'native' libraries that provide direct access to Android features from C or C++ code. Most platform features cannot be accessed this way, however; they can only be reached by passing through a layer called JNI (Java Native Interface) that 'translates' function calls to and from the format used by Java. JNI is difficult to use, and somewhat dangerous, as even small mistakes can crash your application. For this reason, many native developers use JNI — and by extension, much of Android — as little as possible.
This issue required that I implement my own window-management and UI control library, since using Android's controls would have required hundreds of JNI calls between the UI and the synthesis engine. Developing a full-featured UI framework is a big task, but it's something I've done before, and by relying on Android as little as possible, I was able to make Syntheogen largely platform-independent. In fact, Syntheogen was mostly developed in Windows, with Visual Studio. Even aside from the UI framework, this created a lot of extra work, but I've developed mobile and embedded applications this way for many years, and it's always turned out to be worth it. In this case, it allowed me to do most of my debugging in Visual Studio, which is fortunate, as the Android NDK debugger is almost unusable.
One regrettable early decision was to use version 1.1 of OpenGL ES rather than version 2.0. Version 1.1 is simpler, and I had used it before, but to do any serious work with OpenGL you really have to use shaders, an advanced technique provided with version 2.0 for filling shapes with images or patterns. Having chosen version 1.1, I was forced to use stencils when clipping patterns to round corners, and that is a poorly-documented, convoluted, and slow solution to an otherwise simple problem.
Another questionable decision was to make the UI layout completely independent of the display aspect ratio, to the extent that black bars are not displayed, yet the images used to render controls are never distorted horizontally or vertically by stretching. I wanted to use as much of the display area as possible, and to render all straight lines with pixel-perfect sharpness, but this complicated the way controls are laid-out and rendered while simultaneously limiting the sort of patterns and gradients I could use to decorate them. A few years ago, when it was possible to see the pixels on an average display, this might have made a noticeable difference, but today it is worthless unless you're using a jeweler’s loupe. I will have to replace a lot of the rendering code before I can improve the application's appearance much further.
The sequencing and synthesis engine presented numerous challenges.
Sequencing is much harder than it looks; in Syntheogen, patterns repeat within loops, loops repeat within songs, and a particular step may be tied on one or both sides to other steps, even steps in other pattern iterations. Simply determining what steps will play in a given span is very difficult, and I'm surprised sometimes that it works at all.
Syntheogen is my first audio application, and I developed the synthesis engine from scratch, so there was a lot of theory to be learned. I studied Dodge and Jerse's 'Computer Music', and I read much of Curtis Roads' 'The Computer Music Tutorial'. The Dodge and Jerse book was useful, but it contains serious gaps that a working synthesis developer must fill elsewhere. It also contains frustrating math errors that were especially aggravating given the book's ridiculous price. The Roads book is very popular, but I found it only occasionally worthwhile. Though it's very large, the book's coverage is surprisingly superficial, and I can't forgive an author who wastes my shelf space with a full-page photograph showing me what a compact disc looks like. I had to go to the KVR Audio forums to find a good general-purpose filter, and to learn more about reverberation algorithms.
There's a rule that warns software developers not to optimize any length of code unless they know with certainty that a bottleneck exists there. Normally I'm careful to honor this rule, but I rarely felt I had that luxury when developing Syntheogen. In most applications, the processor spends much of its time idling while waiting for user input. In a synthesis app, during playback, the processor receives a constant stream of lengthy tasks, and it can idle only if it finishes a given block before receiving the next. My constant worry was that, if I did not optimize everything in the synthesis path, I would end up with an app that perhaps contained no bottlenecks, but was instead everywhere too slow. In a sense, the entire synthesis engine constitutes a bottleneck, simply because of the way it is used. Optimizing so liberally created a lot of code that was difficult to write and remains difficult to read. I've been pleased with Syntheogen's performance, though, so I think that was the right approach.
I'm generally happy with how Syntheogen has turned out. It's interesting to compose a song with a synthesizer you wrote yourself, and to know in detail that everything you hear is the output of some relatively simple math operations.
There is still a great deal of work to be done, though. Some obvious omissions, like chorus and phase effects, must be remedied. Advanced users will want sample editing and MIDI export capabilities. None of these things are especially difficult, just time-consuming.
Now that I've used Syntheogen awhile, I find some tasks to be a bit awkward. When setting up a new loop, the user must create many loop elements and tracks, and I would like a way to automate that. Also, when setting synthesis parameters, there is no way to hear your changes without playing a pattern, which you may not have yet. I would like some way to audition the patch from the TRACK SYNTH and TRACK EFFECTS dialogs, but I'm not sure I want to sacrifice the display area needed for even a small keyboard. These and other issues will be addressed, and naturally, I'll be raising the price as I do so.
I could attempt to produce a full-featured DAW, but I don't think that's a realistic goal for mobile devices, and it's also not what I want to use. What I really want is something like a harmonica. The harmonica can't do everything, but what it does do, it does better than anything else, and it does it in a way that is compact, durable, inexpensive, easy, and fun. Hopefully, by narrowing my focus, I can approach that standard with Syntheogen.
So what more is there to say than support the developer!!!
The application is good already and will be better-
It has functions that do not exist in other multi track applications, and sound wise it comes with quality and implementations you won't find in other Android applications-
All for the price of a coffee.
PDF file of the article:
This man is beyond dedication to the application Caustic. It seems to me that the passion he feels for Caustic is close to what many would reserve for a human being.
He makes intricate sample packs for Caustic and I will let him explain it himself, and for our enjoyment he has given us a sample pack to try, and there are some packs for free at the Playstore- links at the end of the post.
At the end he ever so slightly mentions CausticCore-
Well, the people that are involved, as I understand it, number around three...or more...
I don't know!!! They are so goddamn secretive about it!!!
It is a project that has the blessing of the developer of Caustic, but as I understand it,
with 90% certainty, he is not directly involved in it.
(Yes I know I should ask but want to post this now and am too lazy to wait for answers.
YES irresponsible journalism is my forte.)
All that I can gather from following threads at the Caustic forum is that it is to take the core of Caustic and build it into a Monster of functionality...
Seen some pictures and heard angels whisper unholy rumours
while walking through a cloudy moonlit night-
colour me intrigued!
So here we go!
In his own words JBLANN!
I bumped into the Caustic 2 App by Single Cell Software, about a year and a half ago, by mistake actually. And at first, I thought that it wouldn't really amount to much, and nearly dumped it... But as I poked into it a little further, I began to realize its sheer potential in its capability to let me create music in a new, refreshing way that I never thought could be possible.
Now, we are pushing this app beyond its boundaries with new revolutionary techniques and crazy ideas that really put Caustic above the rest, in regards to being able to efficiently and quickly lay down a groove with minimal effort; thus, the Builderz Project is born.
What, exactly, is the Builderz Project for Caustic? It is a concept and structure that takes the limits of this app on a mobile device and expands your creativity to new territories you never thought possible.
In this current version build, the Caustic 2.1 App gives you six Synth Machines to create your tracks; but the Builderz Project takes those six Machines and expands them to twelve or even 18 virtual machines and beyond. Taking MultiSamples to a new level allows you to load up Presets that feature more than just a single sound or tone element; now giving you two, three, or more elements in a single preset. For example, one particular Preset may have an atmospheric pad type sound, with a lead sound, and an array of sound effects, all mapped out on the keyboard, all crammed into one Synth Machine.
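The keyboard mapping described above can be pictured as a set of key zones, each owning a range of notes. The sketch below is purely illustrative; the names and note ranges are my own assumptions, not how Caustic or the Builderz presets are actually implemented.

```java
// Hypothetical key-zone map: several sound elements share one keyboard,
// each owning a contiguous range of MIDI notes.
class KeyZone {
    final int low, high;        // inclusive MIDI note range
    final String element;       // e.g. "pad", "lead", "fx"
    KeyZone(int low, int high, String element) {
        this.low = low; this.high = high; this.element = element;
    }
}

class ZoneMap {
    final KeyZone[] zones;
    ZoneMap(KeyZone[] zones) { this.zones = zones; }

    // Find which sound element a pressed key belongs to.
    String elementFor(int note) {
        for (KeyZone z : zones)
            if (note >= z.low && note <= z.high) return z.element;
        return null;            // unmapped key
    }
}
```

With a map like this, one "machine" can answer a single keyboard with pad, lead, and effects sounds depending on where you play, which is the trick that multiplies six machines into many more.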
On top of multi-plexing many different sound elements on a single Synth Machine, I also employ what I call SBT -- Sonic Balance Technique. This goes along with a personal theory of mine about a flaw I find in most synthesizers and samplers, one that plagues musicians and studio engineers trying to incorporate their sounds and performance in the mix and keep it clean and balanced. Certain sounds may sound harsh or shrill or just "too loud" as you play higher notes on a keyboard; or they may start to get dull and muddy and disappear as you play lower notes on the keyboard. This creates an imbalance of tone that I believe causes a lot of frustration for the musician, or song writer, or audio engineer trying to get a good solid clean mix, without incorporating an array of rather expensive tools and hardware to get the Sonic Balance to ride well in the final mix.
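One common way to fight the imbalance described above is key tracking: scaling a note's gain by how far it sits from a reference key. The sketch below is my own illustration of that general idea, not the actual SBT process, and all names and values in it are assumptions.

```java
// Hypothetical key-tracking gain: attenuate notes above a reference key
// and boost notes below it by a fixed number of dB per octave, so the
// perceived level stays more even across the keyboard.
class KeyTracking {
    static double gainFor(int midiNote, int refNote, double dbPerOctave) {
        double octaves = (midiNote - refNote) / 12.0;
        double db = -octaves * dbPerOctave;   // higher notes get quieter
        return Math.pow(10.0, db / 20.0);     // dB -> linear gain
    }
}
```

With 6 dB per octave and middle C as the reference, a note one octave up plays at roughly half amplitude and a note one octave down at roughly double, which is the kind of leveling a balanced preset bakes in so the player doesn't have to reach for EQ.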
EIPStudiosOhio is the first to tackle this problem at its roots, as we create our presets exclusively for Caustic 2 and future CausticCore projects and apps. This is a painstaking multi-step process that goes beyond what most other content providers put together for their samplepacks, which usually will only include one, maybe two or even three multisamples for each preset... that's it. EIPSO Content Packs are created in a fun, crazy, and complex way, yet maintain SBT, so that YOU, the musician, can simply load one up and get right to making music right away -- not spending hours in sheer frustration tweaking EQs and Levels and effects and filters, trying to get your sound to fit in the mix somewhere.
CausticCore is the engine of the Caustic App and of future projects that will be slowly making their way to the surface from the depths of the sea of imagination of the many minds involved in it from the SingleCellSoftware Forum Community. I am proud to be a part of the Project, and very proud to be part of the most unique group of people on the SCS Community Forum I have ever had the pleasure to know.
It's only going to get better from here... as more people discover, and start to get hooked on the Finest Pocket Music Workstation on the Planet.
Are You a Caustic Warrior?
Jason M Blann EIPStudiosOhio
So here are two different files for download, and they are full of Builderz pack demos-
One file is in the form of an APK; you just install it as you would any non-market application and it will find its way to your Caustic folder- The second is the Presets, which you place manually in the folder for PCM Synths so you can use them on your PC-
Yes, I hope that you are aware that there is a PC version for free download at the Caustic site-
So you can produce on your computer too! With the added advantage of transferring files between device and PC, and back again.
Playstore link for the Builderz packs
EIPStudios YouTube Channel
EIPStudios Soundcloud channel
Of Course there has to be some music too!
Please make a donation to