To read more go to:
This will make sense to programmers.
If I understand it correctly, it is a library that makes it easier to implement Pure Data via libpd and publish it on Android?
In either case go here and check it out:
Csound, being a kind of programming language for sound, has been used to make some applications for Android that sound good, like Etherpad, or that are interesting, like Moon Synth and Psychoflute, for example. With this new Android app it becomes possible to use Csound without compiling it into an APK and building an interface, which must be interesting for people who already use Csound, or for you brainiac tinkerers out there!
To read the post on Palm Sounds go here:
To read more about Csound go here:
A podcast talking with the main man responsible for MIDI in Android 6.0.
From the blog post:
This time, Tor and Chet get all musical with Phil Burk from the Android Audio team. Phil worked on the new MIDI feature in the Android 6.0 Marshmallow release, and joins the podcast to talk about MIDI (history as well as Android implementation), electronic music, and other audio-related topics.
Bryan said it was his favorite episode so far. But then Bryan's an audio engineer, so he might be slightly biased.
Android MIDI: It's music to our ears.
Subscribe to the podcast feed or download the audio file directly.
The most exciting audio news is that MIDI will be easier for developers to implement.
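For developers curious what this looks like in practice, here is a minimal sketch of the new android.media.midi API from API 23, enumerating attached MIDI devices and sending a note (the surrounding class and the choice of port 0 are just illustrative scaffolding):

```java
import android.content.Context;
import android.media.midi.MidiDevice;
import android.media.midi.MidiDeviceInfo;
import android.media.midi.MidiInputPort;
import android.media.midi.MidiManager;

public class MidiSketch {

    // Open the first attached MIDI device and send it a Note On.
    // A minimal sketch of the API 23 android.media.midi package;
    // error handling and port-count checks are omitted for brevity.
    public static void playNote(Context context) {
        MidiManager midiManager =
                (MidiManager) context.getSystemService(Context.MIDI_SERVICE);
        MidiDeviceInfo[] devices = midiManager.getDevices();
        if (devices.length == 0) return; // no MIDI hardware attached

        midiManager.openDevice(devices[0],
                new MidiManager.OnDeviceOpenedListener() {
                    @Override
                    public void onDeviceOpened(MidiDevice device) {
                        if (device == null) return; // open failed
                        MidiInputPort port = device.openInputPort(0);
                        // Note On, channel 1, middle C, velocity 100
                        byte[] noteOn = {(byte) 0x90, 60, 100};
                        try {
                            port.send(noteOn, 0, noteOn.length);
                        } catch (java.io.IOException e) {
                            // device may have been disconnected
                        }
                    }
                }, null);
    }
}
```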
So maybe some of the standalone synthesizer apps will turn on MIDI with Marshmallow, as there are some decent ones without MIDI or a sequencer. But to be honest, some of the best apps have already implemented MIDI with their own solutions...
The audio will also be of higher quality...
To read more about Marshmallow go here:
and here is a list of the devices that will be updated so far:
Here is what is said on the website:
Develop a sweet spot for Marshmallow: Official Android 6.0 SDK & Final M Preview
By Jamal Eason, Product Manager, Android
Android 6.0 Marshmallow
Whether you like them straight out of the bag, roasted to a golden brown exterior with a molten center, or in fluff form, who doesn’t like marshmallows? We definitely like them! Since the launch of the M Developer Preview at Google I/O in May, we’ve enjoyed all of your participation and feedback. Today with the final Developer Preview update, we're introducing the official Android 6.0 SDK and opening Google Play for publishing your apps that target the new API level 23 in Android Marshmallow.
Get your apps ready for Android Marshmallow
The final Android 6.0 SDK is now available to download via the SDK Manager in Android Studio. With the Android 6.0 SDK you have access to the final Android APIs and the latest build tools so that you can target API 23. Once you have downloaded the Android 6.0 SDK into Android Studio, update your app project’s compileSdkVersion to 23 and you are ready to test your app with the new platform. You can also update your app’s targetSdkVersion to 23 to test out API 23-specific features like auto-backup and app permissions.
Along with the Android 6.0 SDK, we also updated the Android Support Library to v23. The new Android Support library makes it easier to integrate many of the new platform APIs, such as permissions and fingerprint support, in a backwards-compatible manner. This release contains a number of new support libraries including: customtabs, percent, recommendation, preference-v7, preference-v14, and preference-leanback-v17.
Check your App Permissions
Along with the new platform features like fingerprint support and Doze power saving mode, Android Marshmallow features a new permissions model that streamlines the app install and update process. To give users this flexibility and to make sure your app behaves as expected when an Android Marshmallow user disables a specific permission, it’s important that you update your app to target API 23, and test the app thoroughly with Android Marshmallow users.
How to Get the Update
The Android emulator system images and developer preview system images have been updated for supported Nexus devices (Nexus 5, Nexus 6, Nexus 9 & Nexus Player) to help with your testing. You can download the device system images from the developer preview site. Also, similar to the previous developer update, supported Nexus devices will receive an Over-the-Air (OTA) update over the next couple days.
Although the Android 6.0 SDK is final, the device system images are still developer preview versions. The preview images are near final but they are not intended for consumer use. Remember that when Android 6.0 Marshmallow launches to the public later this fall, you'll need to manually re-flash your device to a factory image to continue to receive consumer OTA updates for your Nexus device.
What is New
Compared to the previous developer preview update, you will find this final API update fairly incremental. You can check out all the API differences here, but a few of the changes since the last developer update include:
To make sure that your updated app runs well on Android Marshmallow and older versions, we recommend that you use Google Play’s newly improved beta testing feature to get early feedback, then do a staged rollout as you release the new version to all users.
Google held their big yearly developer talks, and when it comes to Android there is a new version coming that is still in preview. Not too much audio news...
In this talk they skip over it quickly; it starts around the 31-minute mark.
Mostly, MIDI implementation will be easier and there will be generally higher audio quality.
Boring enough, as the more serious apps have already implemented their own MIDI solutions (some much better than others!) and some have already reached the same audio quality...
Well, maybe the MIDI part is good news, as there are good applications with inferior MIDI...
Another thing that seems to be coming, not mentioned in the video but which I picked up from arstechnica.com, is USB Type-C support that will allow direct connection of USB microphones and the like. That would be great, as solutions exist today, but they cost money and involve specific devices and external hardware...
No mention of latency.
Guess it does not cost anything as a developer to take a look at what is promised in this audio SDK? It is free to use in some circumstances-
We draw a distinction between “app store” apps and embedded applications. For the former, Superpowered is free. If you are building the latter, please contact us.
Read more: http://superpowered.com/license/#ixzz3ZC1RbRkf
This is the homepage of Superpowered-
This is the short explanation-
Package Android APKs for ARC (App Runtime for Chrome)
Introducing ARC Welder -- a tool to help you test and publish your Android Apps to Chrome OS using the App Runtime for Chrome (beta), ARC.
And to read more:
Soon Android will be on everything, regardless of what operating system you use!
Even most modern TV sets come with Android built in...
This is the first article in a series where developers talk about different issues regarding making applications for Android. To start with, Andreas, the developer of G-Stomper Studio, VA-Beast and G-Stomper Rhythm, will talk about latency, as this is something that concerns many music makers who use Android.
If you are not a programmer, do not be scared to read it, as it is understandable for non-programmers too!
So thank you so much Andreas for taking the time to write and share with us all!
Latency in audio applications is probably one of the most discussed and also one of the most annoying issues on the Android platform. Understanding and handling latency the right way can be a mighty jungle, especially if you’re a “normal” developer, and not a scientist.
This article is focused on output latency on Android devices, not input or round-trip latency. Hopefully someday I’ll be able to write about input latency as well, but so far input and round-trip latency have not been an issue in my applications. So the term latency in this article always means output latency. Also, please forgive me if I skip some scientific details. It is neither my goal nor am I able to write a scientific paper about latency on Android. What you read is my personal experience with the different aspects of output latency on the Android platform.
Output Latency, what is it?
In short, output latency is the time from the moment you press a button or a piano key until you hear the sound from the speakers. And, output latency in audio applications is something we all want to get rid of.
The complete output latency in a musical application, which includes live playing, is a combination of the following 3 main factors:
1. Control Input Latency (e.g. display reaction time)
2. Application Latency (everything that happens in the app layer)
3. Audio System Latency (everything that happens in the system layer)
Control Input Latency (e.g. display reaction time)
The Control Input Latency is the time from the moment you touch the screen (or an external MIDI keyboard) until the audio system gets notified by the Android OS to do something. It is influenced by various factors, which strongly depend on your device and Android version. It can vary from a few milliseconds up to 300ms or even more. The Control Input Latency is under the full control of the Android OS and the underlying hardware. There’s no way to optimize or measure it from inside an app. But you can get rid of a good part of it by using a MIDI controller/keyboard. The reaction time of an external MIDI keyboard is usually around 30-40ms faster than the on-screen controls. This may surprise you, but the undisputed king regarding display reaction time is still the Google/Samsung Galaxy Nexus (2011).
Audio Output Latency (everything after the Control Input Latency)
The Audio Output Latency is the time from the moment when an application starts to play a sound until you hear it from the speakers. The Audio Output Latency is hardware, operating system and app dependent. A good part of it can be optimized from inside the app (as long as the hardware and operating system allow it). The Audio Output Latency can vary from ~35ms up to over 250ms. Sure, there are apps that report latencies down to 10ms, but that is not the whole story (more about this later).
Application Latency (everything that happens in the app layer)
“Application Latency” is not an official term. I call it that because it happens in the main application, the audio app. What is meant is the time from the moment when an application starts to play a sound (technically, when it starts to fill an audio buffer) until the buffer is passed (enqueued) to the underlying audio system (AudioTrack or OpenSLES). This part is under the direct control of the audio application. It depends on the defined audio system main buffer size and the app-internal buffering.
AudioTrack is the out-of-the-box system, which is guaranteed to run stably on every Android device.
It is not meant for real-time audio applications, but since it’s the one and only ready-to-use system, it is used in most audio apps. AudioTrack has a device-dependent minBufferSize which can be obtained by invoking AudioTrack.getMinBufferSize(). In short, AudioTrack has full control over the minBufferSize as well as over the way the buffers are handled (once a buffer is passed to the AudioTrack system). The lowest minBufferSize ever reported by AudioTrack comes from the Google/Samsung Galaxy Nexus (2011) and corresponds to an application latency of 39ms at a sample rate of 44100Hz. More likely on modern non-Nexus devices are minBufferSizes around 80ms. Using smaller buffers with AudioTrack than the reported minBufferSize usually results in an initialization error.
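To make that concrete, here is a minimal sketch (standard Android APIs; the surrounding class is just scaffolding) of querying the minBufferSize and converting it to milliseconds:

```java
import android.media.AudioFormat;
import android.media.AudioTrack;

public class MinBufferCheck {

    // Query the device-dependent minimum buffer size for 44.1 kHz
    // stereo 16-bit PCM and convert it to an application latency in ms.
    public static double minBufferLatencyMs() {
        int sampleRateHz = 44100;
        int minBufferSizeBytes = AudioTrack.getMinBufferSize(
                sampleRateHz,
                AudioFormat.CHANNEL_OUT_STEREO,
                AudioFormat.ENCODING_PCM_16BIT);

        int bytesPerSample = 2; // 16-bit PCM
        int numChannels = 2;    // stereo

        // bytes -> frames -> milliseconds
        return minBufferSizeBytes * 1000.0
                / sampleRateHz / bytesPerSample / numChannels;
    }
}
```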
The native OpenSLES system on the other hand allows more control. The buffer size, as well as the way the buffers are handled, is the responsibility of the app developer. OpenSLES allows smaller buffers than AudioTrack, of course only as long as a device can handle it. The smallest well-working OpenSLES buffer size in the G-Stomper environment corresponds to an application latency of 10ms on Android 5.x and 20ms on Android 4.4 (both with a Nexus 9).
The application latency can be calculated with a simple formula:
Application Latency (AudioTrack buffer) = audioTrackByteBufferSize * 1000 / sampleRateHz / bytesPerSample / numChannels
Application Latency (internal buffer) = internalFloatBufferSize * 1000 / sampleRateHz
Now take the maximum of these two values and you have the Application Latency.
On the Android platform, this value can vary from ~10ms up to ~200ms.
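As a worked example of the formula above (the buffer sizes here are illustrative, not measurements from any particular device):

```java
public class ApplicationLatencyExample {

    // Worked example of the two formulas above; the buffer sizes are
    // made-up illustrative values, not real device measurements.
    public static void main(String[] args) {
        int sampleRateHz = 44100;
        int bytesPerSample = 2;   // 16-bit PCM
        int numChannels = 2;      // stereo

        // e.g. a 7056-byte AudioTrack buffer:
        // 7056 * 1000 / 44100 / 2 / 2 = 40ms
        int audioTrackByteBufferSize = 7056;
        double audioTrackMs = audioTrackByteBufferSize * 1000.0
                / sampleRateHz / bytesPerSample / numChannels;

        // e.g. an internal float buffer of 882 frames:
        // 882 * 1000 / 44100 = 20ms
        int internalFloatBufferSize = 882;
        double internalMs = internalFloatBufferSize * 1000.0 / sampleRateHz;

        // The Application Latency is the larger of the two: 40ms here.
        double applicationLatencyMs = Math.max(audioTrackMs, internalMs);
        System.out.printf("Application Latency: %.1f ms%n", applicationLatencyMs);
    }
}
```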
Audio System Latency (everything that happens in the system layer)
One of the biggest mistakes regarding output latency is the fact that most apps report only the Application Latency. This of course looks nice (e.g. Nexus 7 2013/AudioTrack: 40ms), but it is only half the truth.
The moment a buffer is passed to AudioTrack, for example, actually only means that the buffer was enqueued in the AudioTrack-internal buffer queue. But you never know exactly how much time will pass before the buffer actually comes out as sound from the speakers. The time from the moment when a buffer is passed to the audio system until you actually hear it from the speakers is what I call the “Audio System Latency”.
The Audio System Latency comes in addition to the Application Latency and strongly depends on the audio system’s internal buffer pipeline (buffer queue, resampling, D/A conversion, etc.). Regarding low latency, this is the most significant part of the latency chain, which reveals the obvious problem with AudioTrack. With AudioTrack, you don’t have any control over its internal buffer pipeline, and there’s no way to force a buffer to pass through it more quickly. What you can do is prepare the buffers in as final a form as possible, e.g. do the resampling in the audio application and always pass the buffers at the system’s native sample rate. Unfortunately this does not change the latency, but it avoids glitches due to Android-internal resampling.
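As a sketch of how an app can ask for the system’s native output sample rate (and native buffer size) in order to do its own resampling, here are the relevant AudioManager calls; these properties exist from Android 4.2 / API 17 onward, and the fallback defaults are my own assumptions:

```java
import android.content.Context;
import android.media.AudioManager;

public class NativeAudioProperties {

    // The native output sample rate, so the app can resample itself and
    // hand the system ready-to-play buffers. May be unreported on some
    // devices, hence the fallback default.
    public static int nativeSampleRate(Context context) {
        AudioManager am =
                (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        String sr = am.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE);
        return sr != null ? Integer.parseInt(sr) : 44100;
    }

    // The native buffer size in frames, useful for sizing app buffers
    // as a multiple of what the hardware actually consumes.
    public static int nativeFramesPerBuffer(Context context) {
        AudioManager am =
                (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        String fpb =
                am.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER);
        return fpb != null ? Integer.parseInt(fpb) : 256;
    }
}
```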
I’ve measured Audio System Latencies of more than twice the Application Latency. In other words, if the Application Latency is 80ms, it can easily be that the full output latency is more than 240ms, which is ridiculous for a real-time application.
What did Samsung do in their Professional Audio SDK to achieve such low latencies?
I’m no scientist, but it’s quite obvious that they reduced the audio pipeline (application to speaker) to a minimum, and they did a very good job with impressive results. Unfortunately the SDK is for Samsung devices only, but it’s for sure great pioneering work, and maybe it’ll motivate others to catch up. There’s a nice video presentation of the Samsung Professional Audio SDK on YouTube: https://www.youtube.com/watch?v=7r455edqQFM
For (supported) Samsung devices, it’s definitely a good thing to consider the integration of their SDK.
What can you do as an app developer to get a faster audio pipeline?
Go native! Using the native OpenSLES reduces the Audio System Latency significantly. Even if you work with the same buffer size as with AudioTrack, you’ll notice a big difference, especially on newer Android versions.
Using OpenSLES does not automatically mean “low latency”, but it definitely allows lower latencies than AudioTrack, because all audio buffers are written directly to the audio hardware, without the AudioTrack API and Dalvik/ART runtime overhead. This means the audio pipeline is shorter and therefore faster.
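One related check an app can make from Java before offering a native audio path is whether the device even claims low-latency audio; this flag is the device’s own claim, not a guarantee, and using it to gate an OpenSLES mode is my assumption, not something prescribed here:

```java
import android.content.Context;
import android.content.pm.PackageManager;

public class LowLatencyCheck {

    // FEATURE_AUDIO_LOW_LATENCY is the device's self-declared claim that
    // its audio pipeline is tuned for low latency. Treat it as a hint
    // when deciding whether to offer a native audio path.
    public static boolean claimsLowLatency(Context context) {
        return context.getPackageManager()
                .hasSystemFeature(PackageManager.FEATURE_AUDIO_LOW_LATENCY);
    }
}
```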
“The Audio Programming Blog” provides good tutorials regarding the OpenSLES integration:
Also helpful is this article on GitHub:
The “Google I/O 2013 - High Performance Audio” presentation gives a good overview of low-latency audio on Android in general.
Will the G-Stomper apps get OpenSLES support?
Yes, definitely. Actually, OpenSLES is already integrated as an experimental additional audio system. In the current version 4.0.4, it is exclusively available for Nexus devices. The upcoming version 4.0.5 will introduce OpenSLES to all 1 GHz quad-core (or faster) devices running Android 4.2 (or higher). The default will still be AudioTrack, but users with supported devices will get a notification and will be able to manually switch to OpenSLES in the G-Stomper setup (Setup dialog / Audio / Audio System / OpenSL).
How can the full output latency be measured?
Unfortunately there’s no proper way to automatically calculate the full output latency (Control Input Latency + Application Latency + Audio System Latency) from inside an app. The only way to get real numbers is to measure it.
There’s an article on android.com which shows a way to measure the full audio output latency (Application Latency + Audio System Latency) using an oscilloscope and the device’s LED indicator:
But honestly, not everyone has that equipment.
Here’s a simple way to measure full output latency:
The only things you need are a microphone, a PC with a graphical audio editor installed, and an Android device. While recording on the PC, hold the microphone close to the screen and tap some button or piano key on the Android screen that is supposed to play a sound. Be sure to tap the screen hard enough that the tap is audible. The microphone will record both the physical finger tap and the audio output. Then, in the audio editor on the PC, measure the gap between the two peaks (finger tap and audio output).
Be sure to make more than one recording and take the average of the measured times. The display reaction time especially may vary over multiple taps.
There might also be a quite significant difference between the headphone jack (with an external speaker connected) and the device-internal speakers. Using the device-internal speakers may result in higher latencies because of the post-processing that is usually applied for internal (low-quality) speakers.
This is definitely not the most scientific and also not the most precise approach, but it’s precise enough to give you an idea of the real output latency. You’ll be surprised by the results.
Hey, so if you use Pure Data this is a must, as it allows the creation of patches directly on your device in a very elegant way.
If you do not use Pure Data, maybe this is the excuse you need to start learning it, as with this app you can make patches on the go or just take advantage of patches that have already been made by others.
It is "restricted" to Pure Data Vanilla meaning that extensions to Pure Data made by users are not allowed but it is hardly restrictive as almost anything that can be thought of as a music app is covered.
I really recommend that people check out Pure Data and see if it is not worth the effort, as it is free and very deep if you want it to be. In a sense, this and MobMuPlat are maybe the most advanced music apps available for Android! Even without any skill in making patches, there is so much made already.
I have not learned it in any kind of depth, but I was studying it some this summer, and even though it is not a stroll in the park, it is not very hard either once you get into the concept (talking as a non-programmer). It has the added bonus of teaching you the basics of other programming languages, as the principles are the same, except that Pure Data is made exclusively for musical
(and some graphic) purposes and is easier to use for people who are more visually inclined: you are not writing code; the code is in the form of interconnected boxes, so you can see the relationships between the different programming commands, which can be hard to do with written code...
It is free and works as shown in the picture below: all the different commands are on the right side, and you just drag them out, place them, and make your patches.
So now there are two excellent apps for Pure Data usage!
MobMuPlat, which allows already-made patches to be turned into Android apps with an interface, and this one, where you can make patches on the go or load patches and use them without a nice GUI; and perhaps a patch made here can be loaded into the free desktop part of MobMuPlat and given a fancier GUI?
In either case it is amazing that this possibility exists!
There are a lot of tutorials, books, etc. on how to use Pure Data available through the Pure Data website, YouTube and elsewhere.
Pure Data website:
Mr Nightradio's own programming language that he used to create:
Virtual ANS spectral synthesizer
PixiTracker - simple and fun sample based tracker with minimalistic pixel interface
PixiVisor - revolutionary tool for audio-visual experiments
Soul Resonance - audio-visual album
PixelWave and SpectrumGen synths
Northern Forests and Dragon's Game videos
Tangerine Birthro 2012
It was also mentioned some time ago that Pixilang would be built directly into SunVox, and we can only hope that it happens sooner or later...
Here is what is included in the update:
Palm Sounds also mentioned this update with some personal reflections that can be read here:
Play Store link:
For other versions:
Okay, so Google is starting to roll out their new Android version, and it seems that there are actually things happening with the latency!!! Plus a lot of other improvements when it comes to audio. Sorry to say, you have to suffer through a lot of silliness...
Last week I got my study on, leaving my family to fend for themselves while I took some time to study and break the initial inertia when it came to programming.
So I have taken my first stumbling steps learning the programming language Pure Data.
The way I see it, if you want to make music applications for Android, the easiest and most professional way is through learning the language Pure Data.
It is not easy (at least not for me), but I have broken through the veil slightly, and now it is starting to make sense and becoming fun, as I can finally follow more or less what is going on. That is Pure Data; another programming language you can use for making Android apps is Processing, this one made more for visual programming.
When I started to look into programming and trying to learn it about a year ago, it was very hard for somebody who does not know anything at all. It was mostly hard to find a way of learning that took you from the absolute beginner state in a way that was not too frustrating.
I tried one for a little bit called Python, because it was one of the few where I found instructions that seemed aimed at total beginners, but in the end it was too abstract (for me). The next one I looked into was Processing, and this has so far been the easiest to learn, because there are a lot of tutorials that assume you do not know anything. Even better, it does not explain a lot of the technical details, but eases you into it with exercises that actually do things that are fun and make you feel that you can use it directly, instead of having to learn very abstract concepts before the fun starts.
Anyway this was a long introduction to what I wanted to say...
There is a free course over the internet through Monash University starting the second of June. I signed up, as I never got too far in my study of Processing, and as a programming language it also works as a good companion to Pure Data. Many applications use Pure Data for the audio, but as it has limited graphical possibilities, it often gets combined with Processing.
The whole idea with Processing is to cater to creative people who want to learn programming to use for art in different ways, and it is pretty clear from the trailer for the course that this is the route they want to take here as well.
So if you are interested in signing up for classes:
It is for six weeks and three hours per week.
I am pretty excited, as, being a better visual artist than musician/producer, I have a lot of ideas that I want to realize for some art projects...
Plus, when I get deeper into Pure Data, it will be a good thing to be able to combine the two!
Peter Brinkmann works for Google and has a keen interest in, and has worked hard on, getting the visual audio programming language Pure Data (free and very interesting) into the Android world, building on something called libPD that can also be used to bring Pure Data into other systems. libPD is also usable in conjunction with the programming language Processing (the Two Big Ears developers have implemented this in their Android application Circle and written about the process). His latest news is of course Patchfield, which can make audio apps interact with each other inside Android. Sorry to say, since he released Patchfield there does not seem to have been much interest from developers, so it is hard to get the ball rolling on Patchfield. So if you are a developer, maybe take another look at Patchfield, or if you have never heard of it, take a look at the video that is part of the interview, then go to GitHub and get involved!
It is open source god damn it!
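For developers who want a taste, here is a minimal sketch of driving a Pd patch from Java via libPD’s core API (the patch file “synth.pd” and the receiver names are hypothetical, and this assumes the libpd Java library is on the classpath; on Android the org.puredata.android wrapper adds the audio glue around this same core):

```java
import java.io.File;
import java.io.IOException;

import org.puredata.core.PdBase;

public class LibpdSketch {

    // Load a patch, turn on DSP, and poke values into named receivers.
    // "synth.pd", "pitch" and "trigger" are hypothetical; substitute
    // your own patch and [receive] symbols.
    public static void main(String[] args) throws IOException {
        PdBase.openAudio(0, 2, 44100);      // no input, stereo out, 44.1kHz
        int patch = PdBase.openPatch(new File("synth.pd"));
        PdBase.computeAudio(true);          // like ticking Pd's DSP toggle

        PdBase.sendFloat("pitch", 60);      // lands in [receive pitch]
        PdBase.sendBang("trigger");         // lands in [receive trigger]

        // Pull one rendered buffer out of Pd (1 tick = 64 frames).
        short[] in = new short[0];
        short[] out = new short[64 * 2];    // 64 frames, stereo
        PdBase.process(1, in, out);

        PdBase.closePatch(patch);
    }
}
```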
The interview is from September 2013 but is an interesting read for anyone even slightly interested in developing Audio applications for Android.
So here is some links:
Two Big Ears writing about their work with LibPD / Processing and making their app Circle:
Two Big Ears, the developers of Circle Synth for Android, have now made this possible, according to their website:
A good binaural system makes it possible to hear a sound from any point in space over any pair of headphones in real-time.
You know the problems — CPU-hungry algorithms, no cross-platform support, compromised sound quality and implementation difficulties.
3Dception is a real-time binaural engine that is cross platform and easy to implement. In just a few minutes your game, app, virtual reality or augmented experience can sound as realistic and awesome as it looks.
* Efficient room modelling for enhanced spatialisation
* Desktop and mobile support
* Easy to integrate into existing workflows
* No additional hardware or expensive headphones
* High resolution HRTFs
With over a 50% reduction in CPU usage compared to other solutions, 3Dception gets the job done with no compromises.
It is still not ready for Android, but will be soon, and it will also be available for use with libPD and Pure Data and many other platforms. It is free to use in non-commercial projects...
In either case if this sounds interesting for you as a developer go here:
Two Big Ears, the developers of the nice application Circle Synth (for which they used the Pure Data and Processing programming languages), have written a new article about using Pure Data and Unity for Android, which could be an interesting read for programmers.
To read go here:
Please make a donation to