@srihari: i'm way late on this, but your talk is excellent! i especially enjoyed seeing the pitch graphs for your singing vs. "the machine"
the data mining aspect is interesting, as well
it definitely seems like procedurally generated carnatic music is the next step
you mentioned that you're working on an Open Carnatic Music Database, with gamakams stored according to the abstractions you've modeled -- something that might be interesting to explore is being able to query over that data and react in real time to input pitches
that could be the beginnings of a sort of carnatic music AI :simple_smile:
i'm imagining a program that listens to the human musician playing or singing a phrase, queries the database for the most similar gamakam, and generates a response phrase using a related gamakam
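a toy sketch of that listen -> query -> respond loop, just to make the idea concrete (the gamakam names and contours here are made up, pitch is in cents above the tonic, and a real version would need something like DTW over variable-length pitch tracks rather than fixed-length euclidean matching):

```python
import math

# hypothetical stand-in for the Open Carnatic Music Database:
# each gamakam is a named pitch contour in cents above the tonic
# (names and values are illustrative, not real database entries)
GAMAKAM_DB = {
    "kampita": [0, 100, 0, 100, 0, 100],      # oscillating ornament
    "jaru":    [0, 50, 100, 150, 200, 250],   # rising glide
    "nokku":   [200, 100, 100, 100, 100, 100] # strike from above
}

def contour_distance(a, b):
    """euclidean distance between two equal-length pitch contours."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def most_similar_gamakam(input_contour):
    """query: find the stored gamakam closest to the heard phrase."""
    return min(GAMAKAM_DB,
               key=lambda name: contour_distance(GAMAKAM_DB[name], input_contour))

def respond(input_contour):
    """respond: play the matched gamakam's contour, transposed so it
    starts where the human's phrase ended (a toy 'related phrase')."""
    match = most_similar_gamakam(input_contour)
    contour = GAMAKAM_DB[match]
    offset = input_contour[-1] - contour[0]
    return match, [p + offset for p in contour]

# a slightly wobbly kampita-like phrase from the "human"
match, phrase = respond([0, 90, 10, 110, 5, 95])
print(match, phrase)  # -> kampita [95, 195, 95, 195, 95, 195]
```

the fun part would be swapping the toy dict for real database queries and the euclidean match for a proper time-warped similarity over live pitch input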