You can currently attach multiple recordings to a song and trigger them with remote control. However, that approach would need to transition between clips with no latency, and you would need a way to queue up clips before they play rather than selecting them at the moment they play.
Using automation tracks, I could add the ability to record a "seek" action as an automation event. The new audio engine in iOS 8 and 9, and the one in Android, allow sub-second seeking (the old audio engine in iOS 5-7 doesn't), but in some brief testing, the audio engines don't respond quickly enough to jump around seamlessly. Also, the current automation track editing interface doesn't have any kind of graphical waveform display or drag-and-drop interface that would let you split up your audio recording easily.
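For what it's worth, here is a rough sketch of what that seek would look like under the hood, assuming the new engine is built on AVAudioEngine and AVAudioPlayerNode (the names and setup below are illustrative, not BandHelper's actual playback code). A seek is really a stop plus a reschedule, which is where the latency shows up:

```swift
import AVFoundation

// Illustrative sketch only -- not BandHelper's actual playback code.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()

func playFrom(seconds: Double, in file: AVAudioFile) {
    let sampleRate = file.processingFormat.sampleRate
    let startFrame = AVAudioFramePosition(seconds * sampleRate)
    guard startFrame < file.length else { return }
    let remainingFrames = AVAudioFrameCount(file.length - startFrame)

    // A "seek" is really a stop followed by rescheduling the rest of the
    // file from the new position; the gap between these calls is where
    // the audible latency creeps in.
    player.stop()
    player.scheduleSegment(file,
                           startingFrame: startFrame,
                           frameCount: remainingFrames,
                           at: nil,
                           completionHandler: nil)
    player.play()
}

// Usage: wire the player into the engine, open the recording, then jump ahead.
// let file = try AVAudioFile(forReading: recordingURL)
// engine.attach(player)
// engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)
// try engine.start()
// playFrom(seconds: 90, in: file)
```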
At its core, BandHelper is an information management tool. It can view that information (in the case of lyrics) and play it back (in the case of recordings) in commonly used ways. But when you start looking at more advanced uses of those functions, the question becomes: should BandHelper include more of the capabilities of a sequencer / mixer / PDF converter / instant messaging client / etc., or should I leave that to other apps that are designed for those functions? The answer involves a few factors: how easy it is to develop those functions, how many people would use them, whether other apps are available for them, and how easy it is to integrate my app with those other apps.
For this feature, I would encourage you to look at other apps that can do the audio sequencing you're looking for (Loopy might be a place to start) and report back on what you find.