This is what is coming in the next Android release when it comes to audio, and it looks like latency issues will finally be gone for most devices; for some devices, they already are.
To read it all, go here, especially if you are a developer:
This is as reported by techcrunch.com:
The vast majority of Android developers use Google’s own Android Studio integrated development environment (IDE). Google offered a first look at what Android Studio 3.0 will look like. Most of these upcoming features are now available in the Android Studio early release channel.
As Google director of product management Stephanie Saad Cuthbertson told me, the company has only been increasing its investment in Android Studio since its launch four years ago. The theme of this release is “speed and smarts,” Cuthbertson noted. That means the IDE itself now allows developers to write their applications faster and it helps developers to better profile their applications to make them faster, too.
In practice, that means that Android Studio now has a full suite of profiling tools that help developers better diagnose performance issues in their apps. This allows developers to profile CPU, memory and network activity for their apps and see the details for all of these on a timeline that is automatically mapped to touch events, key presses and other events in the applications.
Another area Google focused on was to speed up the build time for complex projects. With this release, Gradle build speeds are significantly faster. The team tested this with projects that use more than a hundred modules, and Cuthbertson noted that for these highly complex projects, build times can go from minutes to seconds.
Android Studio 3.0 will also include a feature that will make it easier to debug any APK, no matter whether it was written in Java with Android Studio or with another tool or in a language like C++ (APK is the format Google uses for packaging Android apps). It also features an improved APK analyzer that helps developers optimize the size of their APKs, for example, by reducing the file size of images and other compressible assets, among other things.
With this release, Android Studio now also offers support for Instant Apps, Google’s format for allowing developers to break their applications into smaller pieces that can run individually and be loaded in seconds, right from its search results pages. To do this, developers have to break their applications into different modules — and Android Studio now supports this.
It’s worth noting that until now, access to Instant Apps — and distribution of their apps — was only available to a small number of developers. This project is now open to all developers.
Other new features in this release include support for Java 8 language features and APIs, an improved layout editor, support for adaptive icons in Android O and XML fonts and downloadable fonts, support for Android Things, updated system images for Android O (which is now in beta), Google Play store support in the emulator and support for Android Wear rotary controls in emulator.
Google also today announced that it is making Kotlin a first-class language for writing Android apps. This first preview of Android Studio 3.0 already includes full support for this new language.
From the Kotlin Blog:
Today, at the Google I/O keynote, the Android team announced first-class support for Kotlin. We believe this is a great step for Kotlin, and fantastic news for Android developers as well as the rest of our community. We’re thrilled with the opportunities this opens up.
For Android developers, Kotlin support is a chance to use a modern and powerful language, helping solve common headaches such as runtime exceptions and source code verbosity. Kotlin is easy to get started with and can be gradually introduced into existing projects, which means that your existing skills and technology investments are preserved.
Starting now, Android Studio 3.0 ships with Kotlin out of the box, meaning Android developers no longer need to install any extras or worry about compatibility. It also means that moving forward, you can rest assured that both JetBrains and Google will be supporting Android development in Kotlin.
In case you are concerned about the other platforms that Kotlin supports (Kotlin/JVM for server and desktop, Kotlin/JS and Kotlin/Native), rest assured that they are as important to us as ever. Our vision here is to make Kotlin a uniform tool for end-to-end development of various applications, bridging multiple platforms with the same language. This includes full-stack web applications, Android and iOS clients, embedded/IoT and much more.
Programming languages are just like human ones: the more people speak a language, the better. First-class support on Android will likely bring more users to Kotlin, and we expect the community to grow significantly. This means more libraries and tools developed in/for Kotlin, more experience shared, more Kotlin job offerings, more learning materials published, and so on. We are excited to see the Kotlin ecosystem flourish!
We will be partnering with Google to create a non-profit foundation for Kotlin. Language development will continue to be sponsored by JetBrains, and the Kotlin team (over 40 people and second largest team at the company) will operate as usual. Andrey Breslav remains the Lead Language Designer, and Kotlin will be developed under the same principles as before. We’ll keep our design processes open, because your feedback is critical for us in moving Kotlin in the right direction.
If you’re at Google I/O, make sure you stop by one of the Kotlin talks on the schedule. And of course, don’t forget to register for KotlinConf in San Francisco in November. It will be an amazing event!
A Big Thank You!
When we started the journey with Kotlin over 6 years ago, we aimed at creating a language that would be in line with the same principles that drive our tools – create something that helps developers with the tedious and mundane tasks, allowing them to focus on what’s truly important. And of course make the process as enjoyable and fun as possible.
We want to thank Google and the Android team for their trust in Kotlin, but above all we want to thank you, our community, our users. Without you Kotlin wouldn’t be where it is today. Thank you for accompanying us during this journey and we hope you join us for the exciting road ahead.
Frequently Asked Questions
We’ve prepared answers to a series of questions that you may have in regard to this announcement. If your question is not covered, please feel free to ask us in the comments. If you are new to Kotlin, make sure you check out the FAQ on the web site where you can learn more about the basics.
Is Kotlin going to become primarily focused on Android?
One of Kotlin’s goals is to be a language that is available on multiple platforms and this will always be the case. We’ll keep supporting and actively developing Kotlin/JVM (server-side, desktop and other types of applications), and Kotlin/JS. We are working on Kotlin/Native for other platforms such as macOS, iOS and IoT/embedded systems.
How does this impact Kotlin’s release cycles?
Kotlin will continue to have its own release cycle, independent from that of Android or Android Studio. The projects remain completely independent. Obviously there will be close collaboration between the product teams to make sure that Kotlin is always working correctly in Android Studio.
Who’s going to work on the Android Studio plugin?
JetBrains will continue to work on the Android Studio plugin, collaborating closely with the Android Studio team.
Will this affect the support for IntelliJ IDEA, Eclipse or NetBeans?
No. Kotlin continues to be a language that targets multiple platforms, and support for other IDEs will continue to be provided as before. Obviously emphasis will be placed on IntelliJ IDEA, hopefully with community contributions on the others.
Will this affect support for macOS or iOS?
No. We still have plans to support both of these systems with Kotlin/Native and nothing has changed in this regard.
Is JetBrains going to be acquired by Google?
No. JetBrains has no plans of being acquired by any company. JetBrains is and continues to be an independent tool vendor catering to developers regardless of their platform or language of choice.
So there has already been low latency on some devices and with some apps, but very soon all newer devices will have low latency, satisfying anyone who wants to play live and direct...
AAudio is a new Android C API introduced in the Android O release. It is designed for high-performance audio applications that require low latency. Apps communicate with AAudio by reading and writing data to streams.
To read the whole thing in glorious detail:
They do promise a professional audio solution with this version, even though it seems to me that things are working really well on some devices already. But this will apply more generally to all devices that run Android O (what sweet starts with O?).
Here is what they say about audio:
AAudio API for Pro Audio: AAudio is a new native API that's designed specifically for apps that require high-performance, low-latency audio. Apps using AAudio read and write data via streams. In the Developer Preview we're releasing an early version of this new API to get your feedback.
To read the whole thing:
To read more go to:
This will make sense to programmers.
If I understand correctly, it is a library that makes it easier to implement Pure Data using libpd and to publish the result on Android?
In either case go here and check it out:
Csound, being a kind of programming language for sound, has been used to make some Android applications that sound good, like Etherpad, or interesting, like Moon Synth and Psychoflute, for example. This new Android app makes it possible to use Csound without compiling it into an APK and building an interface, which must be interesting for people who already use Csound, or for you brainiac tinkerers out there!
To read the post on Palm Sounds go here:
To read more about Csound go here:
A podcast talking with the main man responsible for MIDI in Android 6
From the Blogspot:
This time, Tor and Chet get all musical with Phil Burk from the Android Audio team. Phil worked on the new MIDI feature in the Android 6.0 Marshmallow release, and joins the podcast to talk about MIDI (history as well as Android implementation), electronic music, and other audio-related topics.
Bryan said it was his favorite episode so far. But then Bryan's an audio engineer, so he might be slightly biased.
Android MIDI: It's music to our ears.
Subscribe to the podcast feed or download the audio file directly.
The most exciting audio news is that MIDI will be easier for developers to implement.
So maybe some of the standalone synthesizer apps will turn on MIDI with Marshmallow, as there are some decent ones that lack MIDI or a sequencer. But to be honest, some of the best apps have already implemented MIDI with their own solutions...
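To give an idea of what the new built-in MIDI support looks like in code, here is a minimal, hedged sketch using the android.media.midi API from Android 6.0. The port number, note values and the lack of proper error handling are my own simplifying assumptions for illustration only.

```java
import android.content.Context;
import android.media.midi.MidiDevice;
import android.media.midi.MidiDeviceInfo;
import android.media.midi.MidiInputPort;
import android.media.midi.MidiManager;
import android.os.Handler;
import android.os.Looper;

import java.io.IOException;

public class MidiNoteSender {
    // Opens the first attached MIDI device and sends a single Note On message.
    public static void sendNoteOn(Context context) {
        MidiManager midiManager = (MidiManager) context.getSystemService(Context.MIDI_SERVICE);
        MidiDeviceInfo[] devices = midiManager.getDevices();
        if (devices.length == 0) {
            return; // no MIDI hardware or virtual devices attached
        }
        midiManager.openDevice(devices[0], new MidiManager.OnDeviceOpenedListener() {
            @Override
            public void onDeviceOpened(MidiDevice device) {
                if (device == null) {
                    return; // device could not be opened
                }
                // Port 0 is assumed here purely for illustration.
                MidiInputPort inputPort = device.openInputPort(0);
                if (inputPort == null) {
                    return;
                }
                try {
                    // Note On, channel 1, middle C, velocity 100 (example values)
                    byte[] noteOn = {(byte) 0x90, 60, 100};
                    inputPort.send(noteOn, 0, noteOn.length);
                } catch (IOException e) {
                    // sending failed; handle or log as needed
                }
            }
        }, new Handler(Looper.getMainLooper()));
    }
}
```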
The audio will also be of higher quality...
To read more about Marshmallow go here:
and here is a list of the devices that will be updated so far:
Here is what is said on the website:
Develop a sweet spot for Marshmallow: Official Android 6.0 SDK & Final M Preview
By Jamal Eason, Product Manager, Android
Android 6.0 Marshmallow
Whether you like them straight out of the bag, roasted to a golden brown exterior with a molten center, or in fluff form, who doesn’t like marshmallows? We definitely like them! Since the launch of the M Developer Preview at Google I/O in May, we’ve enjoyed all of your participation and feedback. Today with the final Developer Preview update, we're introducing the official Android 6.0 SDK and opening Google Play for publishing your apps that target the new API level 23 in Android Marshmallow.
Get your apps ready for Android Marshmallow
The final Android 6.0 SDK is now available to download via the SDK Manager in Android Studio. With the Android 6.0 SDK you have access to the final Android APIs and the latest build tools so that you can target API 23. Once you have downloaded the Android 6.0 SDK into Android Studio, update your app project's compileSdkVersion to 23 and you are ready to test your app with the new platform. You can also update your app's targetSdkVersion to 23 to test out API 23-specific features like auto-backup and app permissions.
Along with the Android 6.0 SDK, we also updated the Android Support Library to v23. The new Android Support library makes it easier to integrate many of the new platform APIs, such as permissions and fingerprint support, in a backwards-compatible manner. This release contains a number of new support libraries including: customtabs, percent, recommendation, preference-v7, preference-v14, and preference-leanback-v17.
Check your App Permissions
Along with the new platform features like fingerprint support and Doze power saving mode, Android Marshmallow features a new permissions model that streamlines the app install and update process. To give users this flexibility and to make sure your app behaves as expected when an Android Marshmallow user disables a specific permission, it’s important that you update your app to target API 23, and test the app thoroughly with Android Marshmallow users.
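As a rough sketch of what handling the new runtime permissions model can look like when targeting API 23 (the RECORD_AUDIO permission and the request code below are just example choices of mine, not anything mandated by the release notes):

```java
import android.Manifest;
import android.app.Activity;
import android.content.pm.PackageManager;
import android.support.v4.app.ActivityCompat;
import android.support.v4.content.ContextCompat;

public class PermissionHelper {
    private static final int REQUEST_RECORD_AUDIO = 42; // arbitrary example request code

    // Returns true if the permission is already granted; otherwise asks the user.
    public static boolean ensureRecordAudioPermission(Activity activity) {
        if (ContextCompat.checkSelfPermission(activity, Manifest.permission.RECORD_AUDIO)
                == PackageManager.PERMISSION_GRANTED) {
            return true;
        }
        // The system shows the permission dialog; the result arrives in
        // Activity.onRequestPermissionsResult().
        ActivityCompat.requestPermissions(activity,
                new String[]{Manifest.permission.RECORD_AUDIO},
                REQUEST_RECORD_AUDIO);
        return false;
    }
}
```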
How to Get the Update
The Android emulator system images and developer preview system images have been updated for supported Nexus devices (Nexus 5, Nexus 6, Nexus 9 & Nexus Player) to help with your testing. You can download the device system images from the developer preview site. Also, similar to the previous developer update, supported Nexus devices will receive an Over-the-Air (OTA) update over the next couple of days.
Although the Android 6.0 SDK is final, the device system images are still developer preview versions. The preview images are near final but they are not intended for consumer use. Remember that when Android 6.0 Marshmallow launches to the public later this fall, you'll need to manually re-flash your device to a factory image to continue to receive consumer OTA updates for your Nexus device.
What is New
Compared to the previous developer preview update, you will find this final API update fairly incremental. You can check out all the API differences here, but a few of the changes since the last developer update include:
To make sure that your updated app runs well on Android Marshmallow and older versions, we recommend that you use Google Play’s newly improved beta testing feature to get early feedback, then do a staged rollout as you release the new version to all users.
Google had their big yearly talks, and as for Android, there is a new version coming that is still in preview. Not too much audio news...
In this talk they skip over it quickly; it starts around the 31-minute mark.
Mostly, the MIDI implementation will be easier and there will be generally higher audio quality.
Boringly enough, the more serious apps have already implemented their own MIDI solutions (some much better than others!) and some have already reached the same audio quality...
Well, maybe the MIDI is good news, as there are good applications with inferior MIDI...
Another thing that seems to be coming, not mentioned in the video but which I picked up from arstechnica.com, is new USB Type-C support that will allow direct connection of USB microphones and the like. That would be great, as there are solutions today, but they cost money and involve specific devices and external hardware...
No mention of latency.
I guess it does not cost anything as a developer to take a look at what is promised in this audio SDK? In some circumstances it is free to use:
We draw a distinction between “app store” apps and embedded applications. For the former, Superpowered is free. If you are building the latter, please contact us.
Read more: http://superpowered.com/license/#ixzz3ZC1RbRkf
This is the homepage of Superpowered:
This is the short explanation:
Package Android APKs for ARC (App Runtime for Chrome)
Introducing ARC Welder -- a tool to help you test and publish your Android Apps to Chrome OS using the App Runtime for Chrome (beta), ARC.
And to read more:
Soon Android will be available on everything, independent of what operating system you use!
Even most modern TV sets come with Android built in...
This is the first article in a series where developers talk about different issues regarding making applications for Android. To start with, Andreas, the developer of G-Stomper Studio, VA-Beast and G-Stomper Rhythm, will talk about latency, as this is something that concerns many music makers who use Android.
If you are not a programmer, do not be scared to read it, as it is also understandable for non-programmers!
So thank you so much Andreas for taking the time to write and share with us all!
Latency in audio applications is probably one of the most discussed and also one of the most annoying issues on the Android platform. Understanding and handling latency the right way can be a mighty jungle, especially if you’re a “normal” developer, and not a scientist.
This article is focused on output latency on Android devices, not input or round-trip latency. Hopefully someday I’ll be able to write about input latency as well, but so far input and round-trip latency have not been an issue in my applications. So the term latency in this article always means output latency. Also, please forgive me if I forget some scientific details. It is neither my goal nor am I able to write a scientific paper about latency on Android. What you read is my personal experience with the different aspects of output latency on the Android platform.
Output Latency, what is it?
In short, output latency is the time from the moment you press a button or a piano key until you hear the sound from the speakers. And, output latency in audio applications is something we all want to get rid of.
The complete output latency in a musical application, which includes live playing, is a combination of the following 3 main factors:
1. Control Input Latency (e.g. display reaction time)
2. Application Latency (everything that happens in the app layer)
3. Audio System Latency (everything that happens in the system layer)
Control Input Latency (e.g. display reaction time)
The Control Input Latency is the time from the moment you touch the screen (or an external MIDI keyboard) until the audio system gets notified by the Android OS to do something. It is influenced by various factors, which strongly depend on your device and Android version. It can vary from a few milliseconds up to 300ms or even more. The Control Input Latency is under the full control of the Android OS and the underlying hardware. There’s no way to optimize or measure it from inside an app, but you can get rid of a good part of it by using a MIDI controller/keyboard. The reaction time of an external MIDI keyboard is usually around 30-40ms faster than the on-screen controls. This may surprise you, but the undisputed king regarding display reaction time is still the Google/Samsung Galaxy Nexus (2011).
Audio Output Latency (everything after the Control Input Latency)
The Audio Output Latency is the time from the moment when an application starts to play a sound until you hear it from the speakers. The Audio Output Latency is hardware, operating system and app dependent. A good part of it can be optimized from inside the app (as long as the hardware and operating system allow it). The Audio Output Latency can vary from ~35ms up to over 250ms. Sure, there are apps that report latencies down to 10ms, but this is not the whole picture (more about that later).
Application Latency (everything that happens in the app layer)
“Application Latency” is not an official term. I call it that because it happens in the main application, the audio app. What is meant is the time from the moment when an application starts to play a sound (technically, when it starts to fill an audio buffer) until that buffer is passed (enqueued) to the underlying audio system (AudioTrack or OpenSLES). This part is under the direct control of the audio application. It depends on the defined audio system main buffer size and the app's internal buffering.
AudioTrack is the out-of-the-box system, which is guaranteed to run stably on every Android device.
It is not intended for real-time audio applications, but since it’s the one and only ready-to-use system, it is used in most audio apps. AudioTrack has a device-dependent minBufferSize which can be obtained by invoking AudioTrack.getMinBufferSize(). In short, AudioTrack has full control over the minBufferSize as well as over the way the buffers are handled (once a buffer is passed to the AudioTrack system). The lowest minBufferSize ever reported by AudioTrack comes from the Google/Samsung Galaxy Nexus (2011) and corresponds to an application latency of 39ms at a sample rate of 44100Hz. On modern non-Nexus devices, minBufferSizes corresponding to around 80ms are more typical. Using smaller buffers with AudioTrack than the reported minBufferSize usually results in an initialization error.
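As a small illustration, this is roughly how an app asks AudioTrack for that minimum buffer size; the sample rate and format below are common example values I chose, not ones prescribed by the article:

```java
import android.media.AudioFormat;
import android.media.AudioTrack;

public class MinBufferQuery {
    // Queries the smallest buffer AudioTrack will accept for a typical
    // 44.1 kHz, stereo, 16-bit PCM output stream.
    public static int queryMinBufferSizeBytes() {
        int minBufferSizeBytes = AudioTrack.getMinBufferSize(
                44100,
                AudioFormat.CHANNEL_OUT_STEREO,
                AudioFormat.ENCODING_PCM_16BIT);
        // AudioTrack.ERROR or ERROR_BAD_VALUE means the parameters are not supported.
        return minBufferSizeBytes;
    }
}
```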
The native OpenSLES system, on the other hand, allows more control. The buffer size, as well as the way the buffers are handled, is the responsibility of the app developer. OpenSLES allows smaller buffers than AudioTrack, of course only as long as a device can handle it. The smallest well-working OpenSLES buffer size in the G-Stomper environment corresponds to an application latency of 10ms on Android 5.x and 20ms on Android 4.4 (both with a Nexus 9).
The application latency can be calculated with a simple formula:
audioTrackLatencyMs = audioTrackByteBufferSize * 1000 / sampleRateHz / bytesPerSample / numChannels
internalLatencyMs = internalFloatBufferSize * 1000 / sampleRateHz
Now take the max of these two values and you have the Application Latency.
On the Android platform, this value can vary from ~10ms up to ~200ms.
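Here is a small worked sketch of the formula above; the buffer sizes are made-up example numbers (and the internal float buffer is assumed to be counted in frames), not measurements from the article:

```java
public class ApplicationLatency {
    // Converts an AudioTrack byte buffer size into milliseconds.
    static double audioTrackLatencyMs(int bufferSizeBytes, int sampleRateHz,
                                      int bytesPerSample, int numChannels) {
        return bufferSizeBytes * 1000.0 / sampleRateHz / bytesPerSample / numChannels;
    }

    // Converts an app-internal float buffer size (assumed to be in frames) into milliseconds.
    static double internalLatencyMs(int internalFloatBufferSize, int sampleRateHz) {
        return internalFloatBufferSize * 1000.0 / sampleRateHz;
    }

    public static void main(String[] args) {
        // Example: 7056-byte AudioTrack buffer, 44100 Hz, 16-bit (2 bytes), stereo
        double trackMs = audioTrackLatencyMs(7056, 44100, 2, 2);   // = 40 ms
        // Example: an internal buffer of 1764 frames at 44100 Hz
        double internalMs = internalLatencyMs(1764, 44100);        // = 40 ms
        // The Application Latency is the larger of the two values.
        double applicationLatencyMs = Math.max(trackMs, internalMs);
        System.out.println("Application Latency: " + applicationLatencyMs + " ms");
    }
}
```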
Audio System Latency (everything that happens in the system layer)
One of the biggest mistakes regarding output latency is the fact that most apps report only the Application Latency. This of course looks nice (e.g. Nexus 7 2013/AudioTrack: 40ms), but it is only half the truth.
The moment a buffer is passed to AudioTrack, for example, only means that the buffer was enqueued in AudioTrack's internal buffer queue. But you never know exactly how much time will pass before the buffer actually comes out of the speakers as sound. The time from the moment when a buffer is passed to the audio system until you actually hear it from the speakers is what I call the “Audio System Latency”.
The Audio System Latency comes in addition to the Application Latency and strongly depends on the audio system's internal buffer pipeline (buffer queue, resampling, D/A conversion, etc.). Regarding low latency, this is the most significant part of the latency chain, which reveals the obvious problem with AudioTrack. With AudioTrack, you don’t have any control over its internal buffer pipeline, and there’s no way to force a buffer to pass through it more quickly. What you can do is prepare the buffers to be as final as possible, e.g. do the resampling in the audio application and always pass the buffers at the system's native sample rate. Unfortunately this does not change the latency, but it avoids glitches due to Android-internal resampling.
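One way to find the system's native sample rate (and preferred buffer size) from Java, assuming API 17 or later, is the AudioManager property query sketched below; passing buffers at this rate avoids the resampling step mentioned above. The fallback values are my own assumptions for illustration:

```java
import android.content.Context;
import android.media.AudioManager;

public class NativeAudioConfig {
    // Reads the device's preferred output sample rate and frames-per-buffer.
    // Both properties are reported as strings and may be null on some devices.
    public static int[] query(Context context) {
        AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        String rateStr = am.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE);
        String framesStr = am.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER);
        int sampleRate = (rateStr != null) ? Integer.parseInt(rateStr) : 44100;        // fallback assumption
        int framesPerBuffer = (framesStr != null) ? Integer.parseInt(framesStr) : 256; // fallback assumption
        return new int[]{sampleRate, framesPerBuffer};
    }
}
```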
I’ve measured Audio System Latencies of more than twice the Application Latency. In other words, if the Application Latency is 80ms, the full output latency can easily be more than 240ms, which is ridiculous for a real-time application.
What did Samsung do in their Professional Audio SDK to achieve such low latencies?
I’m no scientist, but it’s quite obvious that they reduced the audio pipeline (application to speaker) to a minimum, and they did a very good job with impressive results. Unfortunately the SDK is for Samsung devices only, but it is certainly great pioneering work, and maybe it’ll motivate others to catch up. There’s a nice video presentation of the Samsung Professional Audio SDK on YouTube: https://www.youtube.com/watch?v=7r455edqQFM
For (supported) Samsung devices, it’s definitely a good thing to consider the integration of their SDK.
What can you do as an app developer to get a faster audio pipeline?
Go native! Using the native OpenSLES reduces the Audio System Latency significantly. Even if you work with the same buffer size as with AudioTrack, you’ll notice a big difference, especially on newer Android versions.
Using OpenSLES does not automatically mean “low latency”, but it definitely allows lower latencies than AudioTrack, because all audio buffers are written directly to the audio hardware, without the AudioTrack API and Dalvik/ART runtime overhead. This means the audio pipeline is shorter and therefore faster.
“The Audio Programming Blog” provides good tutorials regarding the OpenSLES integration:
Also helpful is this article on GitHub:
The “Google I/O 2013 - High Performance Audio” presentation gives a good overview about low latency audio on Android in general.
Will the G-Stomper apps get OpenSLES support?
Yes, definitely. Actually, OpenSLES is already integrated as an experimental additional audio system. In the current version 4.0.4, it is exclusively available for Nexus devices. The upcoming version 4.0.5 will introduce OpenSLES on all 1GHz quad-core (or faster) devices running Android 4.2 (or higher). The default will still be AudioTrack, but users with supported devices will get a notification and will be able to manually switch to OpenSLES in the G-Stomper setup (Setup dialog / Audio / Audio System / OpenSL).
How can the full output latency get measured?
Unfortunately there’s no proper way to automatically calculate the full output latency (Control Input Latency + Application Latency + Audio System Latency) from inside an app. The only way to get real numbers is to measure it.
There’s an article on android.com which shows a way to measure the full audio output latency (Application Latency + Audio System Latency) using an oscilloscope and the device’s LED indicator:
But honestly, far from everyone has that equipment.
Here’s a simple way to measure full output latency:
The only things you need are a microphone, a PC with a graphical audio editor installed, and an Android device. While recording on the PC, hold the microphone close to the screen and tap a button or piano key on the Android screen that is supposed to play a sound. Be sure to tap the screen hard enough that the tap is audible. The microphone will record both the physical finger tap and the audio output. Then, in the audio editor on the PC, measure the gap between the two peaks (finger tap and audio output).
Be sure to make more than one recording and take the average of the measured times. The display reaction time in particular may vary over multiple taps.
There might also be a quite significant difference between the headphone jack (with an external speaker connected) and the device's internal speakers. Using the internal speakers may result in higher latencies because of the post-processing that is usually applied for internal (low-quality) speakers.
This is definitely not the most scientific and also not the most precise approach, but it’s precise enough to give you an idea of the real output latency. You’ll be surprised by the results.
Please make a donation to