@Dominik (sorry for misspelling your name earlier)
A proper accessibility audit often costs as much as the website development itself, so companies simply rely on the word of developers that they have delivered ‘compliant’ (NOT accessible) websites. Most of these developers are clueless and essentially lying, but their clients don’t have the knowledge to identify the issues.
I agree that this is a real problem, but I don’t think a screen reader detection API is the right solution to it. Better developer tooling and auditing tools might be.
I do think exposing screen reader preferences (or other AT-interoperability preferences) can be useful, even necessary, but most of the use cases are in large-scale document editing suites. For example, in web-based spreadsheet apps, there is a real computational and memory cost to inserting extra attributes on tens of thousands or hundreds of thousands of elements. Google developers specifically requested this capability because adding accessibility support to Google Docs caused a measurable performance regression for all users, due to the sheer number of elements being modified.
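To make the cost concrete, here is a minimal sketch of how a spreadsheet grid might gate its per-cell ARIA work. The `atActive` flag stands in for a signal from the proposed API; no standard way to obtain it exists today, so the name and shape are assumptions for illustration only:

```javascript
// Hypothetical: `atActive` would come from the proposed AT-preference API.
// Today an app must either always pay this cost or never add the attributes.
function applyCellAria(cells, atActive) {
  if (!atActive) return 0; // skip the per-element work entirely
  let applied = 0;
  for (const cell of cells) {
    // Each setAttribute call touches the DOM; across 100k+ cells,
    // this is the kind of regression described above.
    cell.setAttribute('role', 'gridcell');
    cell.setAttribute('aria-rowindex', String(cell.rowIndex));
    cell.setAttribute('aria-colindex', String(cell.colIndex));
    applied += 1;
  }
  return applied;
}
```

The point is not the specific attributes but the branch: without a trustworthy preference signal, the `if (!atActive)` fast path is unavailable.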
Another example: due to the severe inadequacies of `contenteditable` implementations today, many advanced web-based document editing suites use custom rendering views and have to make them accessible through scripted live region announcements. There is currently no way for these web applications to adequately respect a screen reader user’s settings for things like typing echo, verbosity, and so on. As a result, the interface of extremely complex web applications will always feel foreign to a screen reader user until some of these preferences can be shared with trusted web applications.
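For illustration, the announcement layer of such an editor might look like the sketch below. The `prefs` object (typing echo mode) is hypothetical, since no current API exposes these screen reader settings to the page; the `aria-live` region itself is the standard mechanism:

```javascript
// Hypothetical prefs shape — today an editor can only guess at these values.
// prefs = { typingEcho: 'characters' | 'words' | 'none' }
function echoTextFor(key, wordBuffer, prefs) {
  // Decide what, if anything, to announce for a keystroke.
  if (prefs.typingEcho === 'none') return null;
  if (prefs.typingEcho === 'characters') return key;
  // 'words': only echo the completed word at a word boundary.
  if (prefs.typingEcho === 'words' && key === ' ') return wordBuffer;
  return null;
}

// DOM side: push the chosen text through a polite live region.
// (Defined but not invoked here; requires a browser document to run.)
function announce(text) {
  if (text === null) return;
  let region = document.getElementById('editor-announcer');
  if (!region) {
    region = document.createElement('div');
    region.id = 'editor-announcer';
    region.setAttribute('aria-live', 'polite');
    document.body.appendChild(region);
  }
  region.textContent = text;
}
```

Without access to the user’s real typing echo setting, the editor either double-announces (its echo plus the screen reader’s) or stays silent when the user expects feedback; the decision function above has no correct default.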
The specification should probably include some more detailed examples like these.