MUSICAL ANDROID

How Music Streaming Apps Are Driving The Industry Forward

6/15/2016


 

They say that music is infectious but in order for anything to spread, it needs a vehicle to drive it forward. In the past, music producers, record companies and artists had to rely on the radio and then TV music stations to get their songs heard by people who would then, hopefully, purchase a record.

However, fast-forward a few decades and things have changed dramatically. Online downloads, YouTube and streaming apps have made it easier and more efficient for artists and producers like Supreme Wilder to get their voices heard. Not only that, but it's also become markedly easier for people to share their favorite tunes.

In fact, according to the latest stats from IFPI, 2014 saw digital music channels generate the same revenue as physical sales (46% each). And, thanks to a 6.9% increase in online activity, the digital music industry was worth an impressive $6.85 billion in 2014.
 
Playlists Helping People Share Music
One of the main outlets helping to bump up the number of digital downloads in recent times is the aforementioned streaming app. Today, an Android user can download any number of apps that will bring them crystal clear audio from musicians and bands past and present.

Spotify is arguably the market leader, with Manhattan Venture Research valuing the company at $5.74 billion in 2015. That valuation makes Spotify almost twice as valuable as its closest streaming rival, Pandora ($3.2 billion). One of the reasons for Spotify's huge growth since it was founded in 2006 is its sharing functionality.

Instead of users simply listening to their favorite songs in isolation, Spotify allows people to share the latest tracks they're enjoying via their social media accounts and public playlists. These days, various companies and celebrities publish their Spotify playlists to the general public. From comedians like Jimmy Fallon to poker players such as Vanessa Selbst, and even musicians like Rihanna, everyone who is anyone now shares their playlists on Spotify.

A Desire To Excel Through Musical Inspiration

Aside from giving us a glimpse into the life of a celebrity, these playlists also have something of an aspirational element to them. Indeed, when someone successful tells the general public that this is what they listen to when they're achieving greatness, people are more likely to listen. This phenomenon becomes obvious when you look at someone like poker pro Vanessa Selbst. Because Selbst is one of the top poker players in the world with more than $11 million in tournament earnings, people want to know how she gets an edge at the felt.

Given that poker is an extremely cerebral game where every decision counts, it's little wonder that the PokerStars-sponsored pro uses music to improve her performance. According to her PokerStars playlist, Selbst draws her fierce attitude from tracks like Big Jet Plane by Angus & Julia Stone and Hospital Beds by Cold War Kids. At the other end of the spectrum, Selbst's fellow pro Andre Akkari prefers to chill out to the likes of Zeca Pagodinho's Deixa A Vida Me Levar when he's playing online.

Elsewhere, the culture of gym tracks represents a similar phenomenon. Over the last decade, people who are looking to get in shape have increasingly been turning to highly toned specimens on Instagram to find out how they exercise and what tunes they listen to when they're working out. According to Beats Music, the app by Beats by Dre, Britney Spears likes to work out to Madonna and Usher, while personal trainer Tony Horton's Spotify playlist includes the Crystal Method and the Decemberists. Naturally, Spotify has become the medium of choice for sharing motivational gym music.

Aspiring Artists Reaching For The Clouds
If Spotify has become the place to listen to and share the music of established artists, then SoundCloud is the place for emerging talent. Founded in Sweden in 2007 and now based in Berlin, Germany, SoundCloud enables users to record, upload, share and promote their music, and the platform now counts more than 175 million monthly users.

Let's say you've solved the problem of output latency and created a killer Android audio track. In order to post your track to your social media accounts and generally give it some exposure, you could add your few minutes' worth of material to the 12 hours of content added to SoundCloud every minute.

And SoundCloud isn't just for people who have used an Android app like Syntheogen to make some simple drum beats. Over the last decade, a number of artists have actually launched successful careers after making it big on SoundCloud. Indeed, because the streaming platform gives an artist the ability to reach the masses without the help of a record label, performers such as the Bay Area's Kehlani and PartyNextDoor have become viral stars thanks to SoundCloud.

Discovery Broadens Horizons And Drives Sales
Another interesting Android app that's brought music to the masses thanks to its streaming capabilities is Shazam. Although not technically a streaming service, Shazam has helped boost the industry's popularity through its discovery capabilities. As we noted at the top of this article, the reason for the growth of the online music industry over the past decade has been the availability of information.

Spotify has allowed us to see what other people are listening to, SoundCloud has given emerging artists a platform to promote themselves, and Shazam has given people the ability to discover new sounds. Founded in 1999, Shazam now has 100 million monthly users, all of whom want to learn more about the song they just heard.

By using an Android mobile's microphone, Shazam creates a digital fingerprint of a snippet of audio. It then compares this fingerprint to its database and not only tells the user the song title and artist, but also provides a link to iTunes, Spotify and beyond. Essentially, what Shazam does is twofold: on the one hand it broadens users' musical horizons, and on the other it helps to increase digital music sales.
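For the technically curious, published descriptions of this kind of audio fingerprinting (notably Avery Wang's 2003 paper on Shazam's design) come down to hashing prominent spectrogram peaks and looking the hashes up in a huge database. The toy C++ sketch below shows the general shape of that idea only; it is not Shazam's actual, proprietary algorithm, and it uses a naive DFT purely for brevity.

// Toy spectral-peak fingerprint: NOT Shazam's real algorithm, just the
// general idea of hashing dominant frequencies over time.
#include <cstdint>
#include <vector>
#include <cmath>

// Dominant frequency bin of one frame, via a naive DFT (slow but short).
static int dominantBin(const float* frame, int n) {
    const double pi = 3.141592653589793;
    int best = 1;
    double bestMag = 0.0;
    for (int k = 1; k < n / 2; ++k) {              // skip DC, real half only
        double re = 0.0, im = 0.0;
        for (int t = 0; t < n; ++t) {
            re += frame[t] * std::cos(2.0 * pi * k * t / n);
            im -= frame[t] * std::sin(2.0 * pi * k * t / n);
        }
        double mag = re * re + im * im;
        if (mag > bestMag) { bestMag = mag; best = k; }
    }
    return best;
}

// Hash consecutive peak pairs into 32-bit tokens; a real service would
// match these tokens against a database of known tracks.
std::vector<uint32_t> fingerprint(const std::vector<float>& pcm, int frameSize = 1024) {
    std::vector<int> peaks;
    for (size_t off = 0; off + frameSize <= pcm.size(); off += frameSize)
        peaks.push_back(dominantBin(&pcm[off], frameSize));
    std::vector<uint32_t> hashes;
    for (size_t i = 0; i + 1 < peaks.size(); ++i)
        hashes.push_back(static_cast<uint32_t>(peaks[i]) << 16 |
                         static_cast<uint32_t>(peaks[i + 1]));
    return hashes;
}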

A New Era For The Music Industry
In fact, that appears to be the raison d'être of the current generation of streaming apps. On top of giving users the ability to listen to music on their Android devices, apps like Spotify, SoundCloud and Shazam are opening up the doors of the music world and, in a sense, making it more social. In line with the culture of sharing images and statuses via Facebook, music is now an experience to be shared, and this, in a nutshell, is why digital downloads are up and the industry is breaking boundaries.

Audio Output Latency on Android, by planet-h.com

2/5/2015


 
This is the first article in a series where developers talk about different issues they face making applications for Android. To start, Andreas, the developer of G-Stomper Studio, VA-Beast and G-Stomper Rhythm, talks about latency, something that concerns many music makers who use Android.
If you are not a programmer, do not be scared off: the article is understandable for non-programmers too!

So thank you so much, Andreas, for taking the time to write this and share it with us all!
_______________________________________________________________________________________________________
Latency in audio applications is probably one of the most discussed and also one of the most annoying issues on the Android platform. Understanding and handling latency the right way can be a mighty jungle, especially if you’re a “normal” developer, and not a scientist. 

This article is focused on output latency on Android devices, not input or round-trip latency. Hopefully someday I'll be able to write about input latency as well, but so far input and round-trip latency have not been an issue in my applications. So the term latency in this article always means output latency. Also, please forgive me if I skip over some scientific details; it is neither my goal, nor am I able, to write a scientific paper about latency on Android. What you read is my personal experience with the different aspects of output latency on the Android platform.

Output Latency, what is it?

In short, output latency is the time from the moment you press a button or a piano key until you hear the sound from the speakers. And, output latency in audio applications is something we all want to get rid of.

The complete output latency in a musical application, which includes live playing, is a combination of the following 3 main factors:

1. Control Input Latency (e.g. display reaction time)
2. Application Latency (everything that happens in the app layer)
3. Audio System Latency (everything that happens in the system layer)


Control Input Latency (e.g. display reaction time)

The Control Input Latency is the time from the moment you touch the screen (or an external MIDI keyboard) until the audio system gets notified by the Android OS to do something. It is influenced by various factors, which strongly depend on your device and Android version, and it can vary from a few milliseconds up to 300ms or even more. The Control Input Latency is under the full control of the Android OS and the underlying hardware; there's no way to optimize or measure it from inside an app. But you can get rid of a good part of it by using a MIDI controller/keyboard: the reaction time of an external MIDI keyboard is usually around 30-40ms faster than the on-screen controls. This may surprise you, but the undisputed king regarding display reaction time is still the Google/Samsung Galaxy Nexus (2011).

Audio Output Latency (everything after the Control Input Latency)

The Audio Output Latency is the time from the moment when an application starts to play a sound until you hear it from the speakers. The Audio Output Latency is hardware-, operating-system- and app-dependent. A good part of it can be optimized from inside the app (as long as the hardware and operating system allow it). The Audio Output Latency can vary from ~35ms up to over 250ms. Sure, there are apps that report latencies down to 10ms, but that is not the whole story (more on this later).

Application Latency (everything that happens in the app layer)

"Application Latency" is not an official term; I call it that because it happens in the main application, the audio app. It is the time from the moment when an application starts to play a sound (technically, when it starts to fill an audio buffer) until the buffer is passed (enqueued) to the underlying audio system (AudioTrack or OpenSLES). This part is under the direct control of the audio application. It depends on the defined audio system main buffer size and the app-internal buffering.

AudioTrack is the out-of-the-box system, which is guaranteed to run stably on every Android device.
It is not intended for real-time audio applications, but since it's the one and only ready-to-use system, it is used in most audio apps. AudioTrack has a device-dependent minBufferSize, which can be obtained by invoking AudioTrack.getMinBufferSize(). In short, AudioTrack has full control over the minBufferSize as well as over the way the buffers are handled (once a buffer is passed to the AudioTrack system). The lowest minBufferSize ever reported by AudioTrack comes from the Google/Samsung Galaxy Nexus (2011) and corresponds to an application latency of 39ms at a sample rate of 44100Hz. On modern non-Nexus devices, minBufferSizes around 80ms are more typical. Using smaller buffers with AudioTrack than the reported minBufferSize usually results in an initialization error.

The native OpenSLES system, on the other hand, allows more control: the buffer size, as well as the way the buffers are handled, is the responsibility of the app developer. OpenSLES allows smaller buffers than AudioTrack, of course only as long as the device can handle it. The smallest well-working OpenSLES buffer size in the G-Stomper environment corresponds to an application latency of 10ms on Android 5.x and 20ms on Android 4.4 (both with a Nexus 9).

The application latency can be calculated with a simple formula:

AUDIO_SYSTEM_MAINBUFFER_LATENCY_MS
 = audioTrackByteBufferSize * 1000 / sampleRateHz / bytesPerSample / numChannels

APP_INTERNAL_BUFFER_LATENCY_MS
= internalFloatBufferSize * 1000 / sampleRateHz

Now take the max of these two values and you have the Application Latency.
On the Android platform, this value can vary from ~10ms up to ~200ms.
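To make the arithmetic concrete, here is the same calculation as plain C++ (the helper names and example numbers are mine, purely for illustration; they are not part of any Android API):

// The two formulas above in code. A 16-bit sample is 2 bytes.
#include <algorithm>
#include <cstdio>

double mainBufferLatencyMs(int byteBufferSize, int sampleRateHz,
                           int bytesPerSample, int numChannels) {
    return byteBufferSize * 1000.0 / sampleRateHz / bytesPerSample / numChannels;
}

double internalBufferLatencyMs(int floatBufferSize, int sampleRateHz) {
    return floatBufferSize * 1000.0 / sampleRateHz;
}

int main() {
    // Example: a 7056-byte AudioTrack buffer at 44100Hz, 16-bit stereo
    // gives 40ms; a 1024-frame internal float buffer gives ~23ms.
    double a = mainBufferLatencyMs(7056, 44100, 2, 2);   // 40.0ms
    double b = internalBufferLatencyMs(1024, 44100);     // ~23.2ms
    std::printf("Application Latency: %.1f ms\n", std::max(a, b));
}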

Audio System Latency (everything that happens in the system layer)

One of the biggest mistakes regarding output latency is that most apps report only the Application Latency. This of course looks nice (e.g. Nexus 7 2013/AudioTrack: 40ms), but it is only half the truth.

The moment a buffer is passed to AudioTrack, for example, only means that the buffer was enqueued into AudioTrack's internal buffer queue. You never know exactly how much time will pass before the buffer actually comes out of the speakers as sound. The time from the moment when a buffer is passed to the audio system until you actually hear it from the speakers is what I call the "Audio System Latency".

The Audio System Latency comes in addition to the Application Latency and strongly depends on the audio system's internal buffer pipeline (buffer queue, resampling, D/A conversion, etc.). Regarding low latency, this is the most significant part of the latency chain, and it reveals the obvious problem with AudioTrack: you don't have any control over its internal buffer pipeline, and there's no way to force a buffer to pass through it more quickly. What you can do is prepare the buffers in as final a form as possible, e.g. do the resampling in the audio application and always pass the buffers at the system's native sample rate. Unfortunately this does not reduce the latency, but it avoids glitches caused by Android's internal resampling.
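For illustration, a minimal linear-interpolation resampler along those lines might look like the sketch below. This is the idea only, not G-Stomper's actual code, and it lacks the low-pass filtering a production resampler needs to avoid aliasing.

// Convert a mono float buffer from srcRate to the device's native rate
// before enqueueing it, so the audio system has no resampling left to do.
#include <cstddef>
#include <vector>

std::vector<float> resampleLinear(const std::vector<float>& src,
                                  int srcRate, int dstRate) {
    if (src.empty() || srcRate == dstRate) return src;
    const double step = static_cast<double>(srcRate) / dstRate;
    const size_t dstLen = static_cast<size_t>(src.size() / step);
    std::vector<float> dst(dstLen);
    for (size_t i = 0; i < dstLen; ++i) {
        const double pos = i * step;                  // position in source
        const size_t i0 = static_cast<size_t>(pos);
        const size_t i1 = (i0 + 1 < src.size()) ? i0 + 1 : i0;
        const double frac = pos - i0;
        dst[i] = static_cast<float>(src[i0] * (1.0 - frac) + src[i1] * frac);
    }
    return dst;   // e.g. resampleLinear(buf, 44100, 48000) for a 48kHz device
}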

I've measured Audio System Latencies of more than twice the Application Latency. In other words, if the Application Latency is 80ms, the full output latency can easily be more than 240ms, which is ridiculous for a real-time application.

What did Samsung do in their Professional Audio SDK to achieve such low latencies?

I'm no scientist, but it's quite obvious that they reduced the audio pipeline (application to speaker) to a minimum, and they did a very good job with impressive results. Unfortunately the SDK is for Samsung devices only, but it's certainly great pioneering work, and maybe it'll motivate others to catch up. There's a nice video presentation of the Samsung Professional Audio SDK on YouTube: https://www.youtube.com/watch?v=7r455edqQFM

For (supported) Samsung devices, it’s definitely a good thing to consider the integration of their SDK.

What can you do as an app developer to get a faster audio pipeline?

Go native! Using the native OpenSLES reduces the Audio System Latency significantly. Even if you work with the same buffer size as with AudioTrack, you’ll notice a big difference, especially on newer Android versions.

Using OpenSLES does not automatically mean "low latency", but it definitely allows lower latencies than AudioTrack, because all audio buffers are written directly to the audio hardware, without the AudioTrack API and Dalvik/ART runtime overhead. This means the audio pipeline is shorter and therefore faster.
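To make "go native" a little more concrete, here is a stripped-down sketch of an OpenSLES buffer-queue player, the setup the tutorials linked below walk through in full. Error handling is omitted, the buffer size and format are examples only, and renderNextBlock() stands in for your own synthesis code.

// Skeleton of an OpenSLES buffer-queue player (Android NDK).
#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>
#include <cstdint>

void renderNextBlock(int16_t* out, int frames);    // assumed: your synth

static SLObjectItf engineObj, outputMixObj, playerObj;
static SLEngineItf engine;
static SLPlayItf play;
static SLAndroidSimpleBufferQueueItf queue;
static int16_t buffer[2][1024 * 2];                // double buffer, 16-bit stereo
static int cur = 0;

// Called each time the system has consumed a buffer: render the next
// block and enqueue it immediately to keep the pipeline full.
static void callback(SLAndroidSimpleBufferQueueItf q, void*) {
    renderNextBlock(buffer[cur], 1024);
    (*q)->Enqueue(q, buffer[cur], sizeof(buffer[cur]));
    cur ^= 1;
}

void startAudio() {
    slCreateEngine(&engineObj, 0, nullptr, 0, nullptr, nullptr);
    (*engineObj)->Realize(engineObj, SL_BOOLEAN_FALSE);
    (*engineObj)->GetInterface(engineObj, SL_IID_ENGINE, &engine);
    (*engine)->CreateOutputMix(engine, &outputMixObj, 0, nullptr, nullptr);
    (*outputMixObj)->Realize(outputMixObj, SL_BOOLEAN_FALSE);

    SLDataLocator_AndroidSimpleBufferQueue loc =
        { SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2 };
    SLDataFormat_PCM fmt = { SL_DATAFORMAT_PCM, 2, SL_SAMPLINGRATE_44_1,
        SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
        SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT,
        SL_BYTEORDER_LITTLEENDIAN };
    SLDataSource src = { &loc, &fmt };
    SLDataLocator_OutputMix outLoc = { SL_DATALOCATOR_OUTPUTMIX, outputMixObj };
    SLDataSink sink = { &outLoc, nullptr };

    const SLInterfaceID ids[] = { SL_IID_ANDROIDSIMPLEBUFFERQUEUE };
    const SLboolean req[] = { SL_BOOLEAN_TRUE };
    (*engine)->CreateAudioPlayer(engine, &playerObj, &src, &sink, 1, ids, req);
    (*playerObj)->Realize(playerObj, SL_BOOLEAN_FALSE);
    (*playerObj)->GetInterface(playerObj, SL_IID_PLAY, &play);
    (*playerObj)->GetInterface(playerObj, SL_IID_ANDROIDSIMPLEBUFFERQUEUE, &queue);
    (*queue)->RegisterCallback(queue, callback, nullptr);

    callback(queue, nullptr);                      // prime the first buffer
    (*play)->SetPlayState(play, SL_PLAYSTATE_PLAYING);
}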

“The Audio Programming Blog” provides good tutorials regarding the OpenSLES integration:
https://audioprograming.wordpress.com/2012/03/03/android-audio-streaming-with-opensl-es-and-the-ndk/
https://audioprograming.wordpress.com/2012/10/29/lock-free-audio-io-with-opensl-es-on-android/

Also helpful is this article on GitHub: 
https://github.com/igorski/MWEngine/wiki/Understanding-Android-audio-towards-achieving-low-latency-response

The "Google I/O 2013 - High Performance Audio" presentation gives a good overview of low-latency audio on Android in general.
https://www.youtube.com/watch?v=d3kfEeMZ65c

Will the G-Stomper apps get OpenSLES support?

Yes, definitely. Actually, OpenSLES is already integrated as an experimental additional audio system. In the current version 4.0.4, it is exclusively available for Nexus devices. The upcoming version 4.0.5 will introduce OpenSLES support for all 1GHz quad-core (or faster) devices running Android 4.2 (or higher). The default will still be AudioTrack, but users with supported devices will get a notification and will be able to manually switch to OpenSLES in the G-Stomper setup (Setup dialog / Audio / Audio System / OpenSL).

How can the full output latency be measured?

Unfortunately there’s no proper way to automatically calculate the full output latency (Control Input Latency + Application Latency + Audio System Latency) from inside an app. The only way to get real numbers is to measure it.

There's an article on android.com which shows a way to measure the full audio output latency (Application Latency + Audio System Latency) using an oscilloscope and the device's LED indicator:
https://source.android.com/devices/audio/testing_circuit.html
But honestly, not everyone has that equipment.

Here’s a simple way to measure full output latency:

The only things you need are a microphone, a PC with a graphical audio editor installed, and an Android device. While recording on the PC, hold the microphone close to the screen and tap a button or piano key on the Android screen that is supposed to play a sound. Be sure to tap the screen hard enough that the tap is audible. The microphone will record both the physical finger tap and the audio output. Then, in the audio editor on the PC, measure the gap between the two peaks (finger tap and audio output).
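If you prefer to automate that last step, the peak-gap arithmetic is simple enough to script. Here is a rough sketch, assuming the recording has been exported as raw mono float samples and that the tap and the audio output are the two loudest events in the clip:

// Gap, in milliseconds, between the two loudest peaks in a recording.
#include <cmath>
#include <cstddef>
#include <vector>

double peakGapMs(const std::vector<float>& samples, int sampleRate,
                 double minGapMs = 20.0) {
    // Index of the loudest sample (one of the two events).
    size_t first = 0;
    for (size_t i = 1; i < samples.size(); ++i)
        if (std::fabs(samples[i]) > std::fabs(samples[first])) first = i;

    // Loudest sample at least minGapMs away from the first peak.
    const size_t guard = static_cast<size_t>(minGapMs / 1000.0 * sampleRate);
    size_t second = first;
    float best = -1.0f;
    for (size_t i = 0; i < samples.size(); ++i) {
        const size_t dist = (i > first) ? i - first : first - i;
        if (dist < guard) continue;               // too close to the first peak
        if (std::fabs(samples[i]) > best) { best = std::fabs(samples[i]); second = i; }
    }
    const size_t gap = (second > first) ? second - first : first - second;
    return gap * 1000.0 / sampleRate;             // average this over several taps
}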

Be sure to make more than one recording and take the average of the measured times; the display reaction time in particular may vary from tap to tap.

There might also be a quite significant difference between the headphone jack (with an external speaker connected) and the device's internal speakers. Using the internal speakers may result in higher latencies because of the post-processing that is usually applied for internal (low-quality) speakers.

This is definitely not the most scientific and also not the most precise approach, but it’s precise enough to give you an idea of the real output latency. You’ll be surprised by the results.

D.N.P writes about using Caustic 3 for his latest release

2/5/2015

 
This has been in the vaults for about two weeks, as I wanted it to be posted unblemished by other news and to hang out on the opening page for a few days, because to my ears it is a very good album. I wrote about the release before and asked the artist to write some words about it, as it was made entirely in Caustic 3.
It is a dark and ambient work, but unlike much other similar fare it does not get on my nerves with endless meanderings into meaningless sound exercises or self-important, bombastic reflections on the dark side of existence.
In any case, it is free and can be downloaded from the Internet Archive (a good choice for anyone who does not want to compromise their releases with the stigma of supporting commercial enterprises and all that entails...). It comes in every format you could desire.

Here is the article as written by D.N.P:
________________________________________________________________________________________________

I never was much of a one for complex melodies; they reminded me of bloated 1970s rock and the worst kind of prog. I always preferred atmospheres and repetition: why skim over notes when you can let them breathe? My first exposure to dark music outside of metal and its doomy subgenres came in the form of the album 'Nostromo' by Sleep Research Facility.
http://www.youtube.com/watch?v=Pu54sTN_yPU&list=PLYhlnOOeWH7Mh_KmCb_mk941eRyh5AOrb 
I'd never heard anything quite like it, and it really did trigger something inside me.

On entering the world of Android, I stumbled upon 'Caustic 2' and immediately started to experiment. Since the huge improvements made in the Caustic 3 upgrade, creating 'dark ambient' is clearly possible, even though the app is perhaps better suited to dance/techno styles. By stretching the bars to 8x and turning the project tempo right down, you can allow the desired sounds to develop.

The PCM Synth has terrific versatility: by downtuning samples and cranking the attack, sustain and release knobs, you can create unexpected noises and form a seamless drone that can pulsate, flow and ebb. You also have to love how complex the SubSynth is; I'm particularly keen on the very slow, very deep sine-wave oscillations it allows.
Taking full advantage of the automation facilities, tweaking the filters can add new dimensions to the overall piece.

The Modular synth unit is a complete mixed bag for me; I always have a fumble through it and often end up with some very unexpected results. You have to have some surprises! The FM synth can be amazing: just clicking a button or changing the sequence of the synths can change one sound into another. Again, experimentation is key to success with this beauty.

You can create some very ethereal feels using the Organ unit, although I've yet to fully take advantage of this... and I've not even tried the KS synth yet.

There's so much that lends Caustic 3 to dark ambient noise music... even the C-SFXR 'Easter egg' can create an eerie, dark sound when downtuned and fed through the numerous effects available.
As for sound packs, I own pretty much all of them and have recorded quite a few samples of my own. You can't have enough 'raw material' at your disposal, and the Single Cell Software site is continually updated with new presets.

I believe it would be nigh on impossible to find a better app than Caustic 3 for creating sounds to scare your neighbours and close family. Go on, go horror...

Cheers

Chris D.N.P
__________________________________________________________________________________________________

Here is a published review:
http://cerebralrift.org/2014/12/08/rotting-away-grant-us-peace/

To download the album:
https://archive.org/details/petroglyph270D.N.P-RottingAway

Syntheogen - Article by the developer Jeremy Neal Kelly

9/18/2013

2 Comments

 
I am proud to present this article, written by the developer, about his application Syntheogen, the latest of the greatest.
As the application is still in BETA, it is interesting to read where it comes from, where it is going, and what was going on in his mind while developing it.
This article will be interesting to a lot of people for different reasons, and I am very grateful that Jeremy took the time to write this for all of us to read...

(He still calls it BETA, but I have not encountered any bugs and think it is more a question of how and what extra functions will be added...)
Syntheogen Article for Musical Android

Jeremy Neal Kelly


DESIGN

Syntheogen ultimately was inspired by a second-hand drum machine I bought some twenty years ago. The machine was a Yamaha RY-10; it had a row of sixteen tiny buttons in the middle, each with a red LED above, and it was the first step sequencer I had used.

Though I love hearing music, I've rarely enjoyed making it; practice is a bore, synthesizer interfaces are maddening, and I always seem to lack the one cable I need to record my amazing riff. Step sequencers are the exception to that rule. They are fun, easy, and immediate; the step sequencer is the only interface I've seen where pressing buttons at random is actually a great idea.

They do have limitations, however. The hardware sequencers I've used offer only one row of buttons, so you cannot view or edit multiple tracks at once. Work on the 'vertical' axis — whether pitch, volume, or whatever — is neither convenient nor enjoyable, especially when bending notes. Worst of all, traditional step sequencers perform poorly outside of quadruple time, since you cannot use odd-numbered meters without losing some of your buttons. I started to make a dub track on my EMX-1, but I had to give up because triplets were such a pain.

Syntheogen is an attempt to escape those limitations. Despite the popularity of skeuomorphism, I say that software can and should transcend the limitations of hardware. In software, we can have as many buttons as we want, and those buttons can even move or change shape. At the beginning of this project, I knew I wanted a two-dimensional array of steps that would allow all tracks to be viewed and edited together. Pitched tracks would occupy multiple rows, allowing melodies and chords to be entered straightforwardly. I wanted an easy way to bend notes or entire chords. I wanted a way to divide the grid into different lengths, so that triplets and unusual time signatures could be used. I also wanted to have patterns with different lengths in the same grid, so that polyrhythms could be programmed easily. These are all things I had tried to do with hardware sequencers, but found to be difficult or impossible.

The result of all this was the Syntheogen LOOP STEPS dialog.
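As an aside for the programmers: the different-length patterns mentioned above reduce, at bottom, to each track wrapping its own step index independently, which is all a polyrhythm needs. A hypothetical C++ illustration (not Syntheogen's actual data model):

// Tracks of different lengths looping in one grid: a 4-step kick against
// a 5-step clave realigns every 20 steps.
#include <cstdio>
#include <vector>

struct Track {
    std::vector<bool> steps;   // one pattern iteration, any length
    bool hit(int globalStep) const { return steps[globalStep % steps.size()]; }
};

int main() {
    Track kick  { { true, false, false, false } };
    Track clave { { true, false, true, false, true } };
    for (int s = 0; s < 20; ++s)
        std::printf("%2d: %c %c\n", s, kick.hit(s)  ? 'K' : '.',
                                       clave.hit(s) ? 'C' : '.');
}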
Where sequencing is concerned, Syntheogen offers some advanced features that (as far as I know) other Android apps lack; on the other hand, certain 'standard' features like controller automation aren't implemented at all. In some cases, I haven't had time to develop what I want, but so far I've excluded automation intentionally for reasons that relate to my design philosophy for this project.

Generally, I don't like the way automation is implemented in the applications and hardware I've used. The typical approach — where the user places the device in an automation record mode, then manipulates the control in real time — does not satisfy me. Usually there is no way to view the data without playing back and watching the controller, and no way to edit it without recording again. This design hides automation data the same way one-line step sequencers hide pitch and volume data, and that's not what Syntheogen is about.

Unfortunately, I haven't found a better way to implement this. In my work developing 'line of business' software, elegance is not required or typically even noticed. If a feature is requested, I must implement it, one way or another, even if the result is a bit awkward.

But this is a different type of project. I'm not a professional musician, and I don't expect Syntheogen to be used by many who are. My users and I make music for fun, so I think Syntheogen itself should be fun. Therefore, in this app, I would rather omit a feature if I cannot implement it in a fun and direct way. Not everything in Syntheogen meets this standard right now, but that is my goal. This approach will limit the app, in a sense, but I would rather do a few things well than do many things poorly. So, if I find a good way to implement automation, or any other feature, I will add it; otherwise, I plan to stick to the things I can do well.

DEVELOPMENT

Syntheogen was written in C++, which I know and like better than any other language. C++ supports object-oriented development, which is important for larger projects, yet it lets the developer work very close to the machine when performance is important. It also supports a resource-management strategy called RAII (Resource Acquisition Is Initialization) that is better and more flexible than the garbage collection offered by Java.
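For readers who haven't met RAII, a minimal generic illustration (my example, not Syntheogen code): the resource is released deterministically the moment its owner leaves scope, even on an early return, whereas a garbage collector releases it whenever it gets around to it.

// RAII in one screen: acquisition in the constructor, release in the
// destructor, so cleanup is automatic and deterministic.
#include <cstdio>

class WavFile {
public:
    explicit WavFile(const char* path) : f_(std::fopen(path, "rb")) {}
    ~WavFile() { if (f_) std::fclose(f_); }      // always runs on scope exit
    WavFile(const WavFile&) = delete;            // exactly one owner
    WavFile& operator=(const WavFile&) = delete;
    bool ok() const { return f_ != nullptr; }
private:
    std::FILE* f_;
};

void loadSample(const char* path) {
    WavFile wav(path);       // acquired here
    if (!wav.ok()) return;   // early return: the destructor still runs
    // ... read sample data ...
}                            // released here, deterministically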

Unfortunately, Android is very much a Java platform, and does not give first-class status to C++ apps. Parts of the platform are represented in the Android NDK (Native Development Kit) with 'native' libraries that provide direct access to Android features from C or C++ code. Most platform features cannot be accessed this way, however; they can only be reached by passing through a layer called JNI (Java Native Interface) that 'translates' function calls to and from the format used by Java. JNI is difficult to use, and somewhat dangerous, as even small mistakes can crash your application. For this reason, many native developers use JNI — and by extension, much of Android — as little as possible.
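To give a feel for why JNI is considered difficult: even a single call from C++ into a Java method takes several lookup steps, and every class, method name, and type signature is a string that is checked only at runtime, where a mistake crashes the app. A minimal, hypothetical example:

// Calling a hypothetical Java method 'void setBrightness(int)' from
// native code. Misspell the name or the "(I)V" signature and you find
// out at runtime, often with a crash.
#include <jni.h>

void setScreenBrightness(JNIEnv* env, jobject activity, int level) {
    jclass cls = env->GetObjectClass(activity);
    jmethodID mid = env->GetMethodID(cls, "setBrightness", "(I)V");
    if (mid == nullptr) {            // method not found: clear the pending
        env->ExceptionClear();       // NoSuchMethodError and bail out
        return;
    }
    env->CallVoidMethod(activity, mid, static_cast<jint>(level));
    env->DeleteLocalRef(cls);        // tidy up the local reference
}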

This issue required that I implement my own window-management and UI control library, since using Android's controls would have required hundreds of JNI calls between the UI and the synthesis engine. Developing a full-featured UI framework is a big task, but it's something I've done before, and by relying on Android as little as possible, I was able to make Syntheogen largely platform-independent. In fact, Syntheogen was mostly developed in Windows, with Visual Studio. Even aside from the UI framework, this created a lot of extra work, but I've developed mobile and embedded applications this way for many years, and it's always turned out to be worth it. In this case, it allowed me to do most of my debugging in Visual Studio, which is fortunate, as the Android NDK debugger is almost unusable.

One regrettable early decision was to use version 1.1 of OpenGL ES rather than version 2.0. Version 1.1 is simpler, and I had used it before, but to do any serious work with OpenGL you really have to use shaders, an advanced technique provided with version 2.0 for filling shapes with images or patterns. Having chosen version 1.1, I was forced to use stencils when clipping patterns to round corners, and that is a poorly-documented, convoluted, and slow solution to an otherwise simple problem.
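For the curious, the stencil workaround looks roughly like the generic GL ES 1.1 sketch below (not Syntheogen's actual rendering code; the two draw helpers are assumed): first write the rounded-corner shape into the stencil buffer only, then draw the pattern where the stencil was set.

// Clipping a pattern to a rounded-corner shape with the stencil buffer.
// With GL ES 2.0 a fragment shader would do this far more simply.
#include <GLES/gl.h>

void drawRoundedMask();   // assumed: issues the rounded-rect geometry
void drawPattern();       // assumed: issues the pattern-fill quad

void drawClippedPattern() {
    glEnable(GL_STENCIL_TEST);
    glClear(GL_STENCIL_BUFFER_BIT);

    // Pass 1: write 1s into the stencil wherever the mask covers,
    // without touching the color buffer.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    drawRoundedMask();

    // Pass 2: draw the pattern only where the stencil equals 1.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawPattern();

    glDisable(GL_STENCIL_TEST);
}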

Another questionable decision was to make the UI layout completely independent of the display aspect ratio, to the extent that black bars are not displayed, yet the images used to render controls are never distorted horizontally or vertically by stretching. I wanted to use as much of the display area as possible, and to render all straight lines with pixel-perfect sharpness, but this complicated the way controls are laid out and rendered while simultaneously limiting the sort of patterns and gradients I could use to decorate them. A few years ago, when it was possible to see the pixels on an average display, this might have made a noticeable difference, but today it is worthless unless you're using a jeweler's loupe. I will have to replace a lot of the rendering code before I can improve the application's appearance much further.

SYNTHESIS

The sequencing and synthesis engine presented numerous challenges.

Sequencing is much harder than it looks; in Syntheogen, patterns repeat within loops, loops repeat within songs, and a particular step may be tied on one or both sides to other steps, even steps in other pattern iterations. Simply determining what steps will play in a given span is very difficult, and I'm surprised sometimes that it works at all.
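To see why, consider even a drastically simplified model of tied steps (hypothetical, nothing like the real engine): a note starts only where a step is set and is not a continuation of its neighbor, and ties can cross the loop boundary, so step 0 may continue the final step of the previous iteration.

// A note sounds at a step only if the step is on and not tied to the
// previous one; ties wrap across pattern iterations.
#include <cstdio>
#include <vector>

struct Step { bool on; bool tiedToPrev; };

bool noteStartsAt(const std::vector<Step>& pattern, long g) {
    const Step& s = pattern[g % static_cast<long>(pattern.size())];
    return s.on && !s.tiedToPrev;
}

int main() {
    // Four steps: steps 2 and 3 are on, and step 0 is tied to the step
    // before it, so the note starting at step 3 sustains through step 0
    // of the next iteration.
    std::vector<Step> p = { { true, true }, { false, false },
                            { true, false }, { true, false } };
    for (long g = 0; g < 8; ++g)
        std::printf("step %ld: %s\n", g, noteStartsAt(p, g) ? "note on" : "-");
}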

Syntheogen is my first audio application, and I developed the synthesis engine from scratch, so there was a lot of theory to be learned. I studied Dodge and Jerse's 'Computer Music', and I read much of Curtis Roads' 'The Computer Music Tutorial'. The Dodge and Jerse book was useful, but it contains serious gaps that a working synthesis developer must fill elsewhere. It also contains frustrating math errors that were especially aggravating given the book's ridiculous price. The Roads book is very popular, but I found it only occasionally worthwhile. Though it's very large, the book's coverage is surprisingly superficial, and I can't forgive an author who wastes my shelf space with a full-page photograph showing me what a compact disc looks like. I had to go to the KVR Audio forums to find a good general-purpose filter, and to learn more about reverberation algorithms.

There's a rule that warns software developers not to optimize any length of code unless they know with certainty that a bottleneck exists there. Normally I'm careful to honor this rule, but I rarely felt I had that luxury when developing Syntheogen. In most applications, the processor spends much of its time idling while waiting for user input. In a synthesis app, during playback, the processor receives a constant stream of lengthy tasks, and it can idle only if it finishes a given block before receiving the next. My constant worry was that, if I did not optimize everything in the synthesis path, I would end up with an app that perhaps contained no bottlenecks, but was instead everywhere too slow. In a sense, the entire synthesis engine constitutes a bottleneck, simply because of the way it is used. Optimizing so liberally created a lot of code that was difficult to write and remains difficult to read. I've been pleased with Syntheogen's performance, though, so I think that was the right approach.

PLANS

I'm generally happy with how Syntheogen has turned out. It's interesting to compose a song with a synthesizer you wrote yourself, and to know in detail that everything you hear is the output of some relatively simple math operations.

There is still a great deal of work to be done, though. Some obvious omissions, like chorus and phase effects, must be remedied. Advanced users will want sample editing and MIDI export capabilities. None of these things are especially difficult, just time-consuming.

Now that I've used Syntheogen a while, I find some tasks to be a bit awkward. When setting up a new loop, the user must create many loop elements and tracks, and I would like a way to automate that. Also, when setting synthesis parameters, there is no way to hear your changes without playing a pattern, which you may not have yet. I would like some way to audition the patch from the TRACK SYNTH and TRACK EFFECTS dialogs, but I'm not sure I want to sacrifice the display area needed for even a small keyboard. These and other issues will be addressed, and naturally, I'll be raising the price as I do so.

I could attempt to produce a full-featured DAW, but I don't think that's a realistic goal for mobile devices, and it's also not what I want to use. What I really want is something like a harmonica. The harmonica can't do everything, but what it does do, it does better than anything else, and it does it in a way that is compact, durable, inexpensive, easy, and fun. Hopefully, by narrowing my focus, I can approach that standard with Syntheogen.

---------------------------------------------------------------------------------------------------------------------------------------------------

So what more is there to say than: support the developer!
The application is good already and will get better.
It has functions that do not exist in other multi-track applications, and sound-wise it comes with a quality and implementations you won't find in other Android applications.
All for the price of a coffee.

Playstore link:
Syntheogen

Homepage:
http://www.syntheogen.com/about.html

PDF file of the article:
syntheogen_article.pdf

