AudioAnalysisV6 with new classifiers, deprecating InDepthAnalysis in favor of LibraryTrack, new guides, and a new Webhook format.
We worked hard over the last few months to make the new classifier generation available via our API.
Previously, all classifier generations were unified under the same GraphQL fields, so upgrading the classifier meant losing access to the previous generation's data.
Because we do not want to force anyone to upgrade to the latest generation immediately, we now introduce each new generation as a separate set of GraphQL fields.
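As a sketch of what per-generation fields look like in practice, the query below selects a V6 analysis on a library track. The field and result names (`audioAnalysisV6`, `AudioAnalysisV6Finished`, and the selected result fields) are illustrative assumptions, not confirmed schema names:

```python
import json

# Hypothetical query selecting the V6 classifier results on a library track.
# "audioAnalysisV6" and the nested result fields are illustrative assumptions.
TRACK_ANALYSIS_QUERY = """
query TrackAnalysis($id: ID!) {
  libraryTrack(id: $id) {
    ... on LibraryTrack {
      audioAnalysisV6 {
        ... on AudioAnalysisV6Finished {
          result { mood genre musicalEraTag }
        }
      }
    }
  }
}
"""

def build_graphql_request(query: str, variables: dict) -> str:
    """Serialize a query and its variables into a standard GraphQL HTTP body."""
    return json.dumps({"query": query, "variables": variables})
```

Because each generation lives under its own field, a single query can select both the old and the new generation side by side during a migration.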
Our new mood multi-label classifier predicts 13 different moods. Our new genre multi-label classifier predicts 17 different genres. In addition, our multi-label classifier predicts 8 different EDM subgenres in case a track has been categorized as EDM.
We are also proud of our musical era classifier, which predicts the most probable era in which a track was produced.
These are just some highlights of the new classifier generation; all the other classifiers have improved as well!
AudioAnalysisV6 is now automatically enqueued with every new library track. For older tracks, we recommend contacting our sales team or re-enqueuing them manually via the API.
Check out the AudioAnalysisV6 Classifier documentation.
We also have new, up-to-date, auto-generated GraphQL documentation available. Check it out.
We are deprecating the term InDepthAnalysis. We now recommend using the Query.libraryTracks fields for fetching data.
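A minimal sketch of paging through Query.libraryTracks, assuming a Relay-style connection shape; the exact connection fields (`edges`, `pageInfo.hasNextPage`, `endCursor`) are assumptions for illustration:

```python
# Sketch: cursor-based pagination over Query.libraryTracks.
# The connection layout (edges, pageInfo) is an assumed Relay-style
# shape, not a confirmed schema detail.
LIBRARY_TRACKS_QUERY = """
query LibraryTracks($first: Int!, $after: String) {
  libraryTracks(first: $first, after: $after) {
    edges { cursor node { id title } }
    pageInfo { hasNextPage endCursor }
  }
}
"""

def collect_tracks(fetch_page, page_size=50):
    """Drain all pages; fetch_page(first, after) returns one connection dict."""
    tracks, after = [], None
    while True:
        page = fetch_page(page_size, after)
        tracks.extend(edge["node"] for edge in page["edges"])
        if not page["pageInfo"]["hasNextPage"]:
            return tracks
        after = page["pageInfo"]["endCursor"]
```

In practice `fetch_page` would issue the HTTP request with the query above; keeping it a plain callable also makes the pagination loop easy to test offline.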
The IDs of the InDepthAnalysis.id fields are backwards compatible with the LibraryTrack.id fields. Mutation.libraryTrackEnqueue should be used for creating new library track records in the library.
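Enqueueing a track via Mutation.libraryTrackEnqueue might be wired up like the sketch below; the input field name (`uploadId`) and the result union members are assumptions for illustration, not confirmed schema details:

```python
# Sketch of building the request for Mutation.libraryTrackEnqueue.
# The input field ("uploadId") and result union members are invented
# placeholders for illustration.
ENQUEUE_MUTATION = """
mutation EnqueueTrack($input: LibraryTrackEnqueueInput!) {
  libraryTrackEnqueue(input: $input) {
    ... on LibraryTrackEnqueueSuccess { enqueuedLibraryTrack { id } }
    ... on Error { message }
  }
}
"""

def build_enqueue_request(upload_id: str) -> dict:
    """Build the GraphQL HTTP payload for enqueueing one track."""
    return {
        "query": ENQUEUE_MUTATION,
        "variables": {"input": {"uploadId": upload_id}},
    }
```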
All the analysis/classifier results are exposed on the LibraryTrack GraphQL type. We updated our library track query builder for easier exploration of our API.
Any new library track analysis triggered via Mutation.libraryTrackEnqueue will opt you into the new V2 Webhook Payload format.
The new webhook format is more flexible and consistent. Learn more here.
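A receiver for the V2 payload could be sketched as below. Every field name here (`version`, `event`, `resource`) is an invented placeholder, since the concrete V2 layout is described in the webhook documentation rather than in this post:

```python
import json

# Sketch of a V2 webhook receiver. The field names ("version", "event",
# "resource") are invented placeholders, not the documented V2 schema.
def handle_webhook(raw_body: str) -> str:
    """Parse one webhook delivery and return a short routing key."""
    payload = json.loads(raw_body)
    if payload.get("version") != "v2":
        return "ignored: not a v2 payload"
    resource = payload.get("resource", {})
    return f"{payload.get('event')}:{resource.get('id')}"
```

Checking an explicit version marker first is what makes the opt-in safe: deliveries in the old format can be ignored or routed to the legacy handler instead of failing to parse.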