Right now everyone seems to be moving to character-based speech recognizers and synthesizers. A character-based speech recognizer is an ASR system in which there is no explicit representation of phones, just Unicode codepoints on the output side. Similarly, a character-based synthesizer is a TTS engine without an explicit mapping onto pronunciations, just orthographic inputs. It is generally assumed that the model ought to learn this sort of thing implicitly (and only as needed).
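To make the contrast concrete, here is a minimal sketch (Python, with a toy hand-written lexicon; the ARPABET-style entries are just for illustration): the character-based target is nothing more than the codepoints of the orthography, while the pipeline-style target goes through an explicit word-to-pronunciation mapping.

```python
# Character-based target: just the Unicode codepoints of the orthography.
def character_targets(text: str) -> list[str]:
    return list(text)

# Pipeline-style target: an explicit word-to-pronunciation mapping.
# (Toy lexicon for this example; a real system would back off to a
# trained G2P model for out-of-vocabulary words.)
TOY_LEXICON = {
    "hello": ["HH", "AH0", "L", "OW1"],
    "world": ["W", "ER1", "L", "D"],
}

def phone_targets(text: str) -> list[str]:
    phones = []
    for word in text.lower().split():
        phones.extend(TOY_LEXICON[word])
    return phones

print(character_targets("hello world"))
# ['h', 'e', 'l', 'l', 'o', ' ', 'w', 'o', 'r', 'l', 'd']
print(phone_targets("hello world"))
# ['HH', 'AH0', 'L', 'OW1', 'W', 'ER1', 'L', 'D']
```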
I genuinely don’t understand why this is supposed to be better. In the vast majority of languages, phonemic transcription really does carry more information than orthography, and making it an explicit target will do a better job of guiding the model than hoping the system self-organizes on its own. Neural nets trained for language tasks often develop an implicit representation of some linguistically well-defined feature, but they tend to do better when that feature is made explicit.
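If one wanted to make the phonemic target explicit without giving up neural end-to-end training altogether, the obvious move is an auxiliary phone-prediction head. The following is a hedged sketch (PyTorch; the BiLSTM encoder, the dimensions, and the 0.5 mixing weight are placeholders of my own, not a description of any particular published system):

```python
import torch
import torch.nn as nn

class DualHeadASR(nn.Module):
    """Toy acoustic model with a character head and an auxiliary phone head."""
    def __init__(self, feat_dim=80, hidden=256, n_chars=32, n_phones=48):
        super().__init__()
        # A generic BiLSTM stands in for whatever acoustic encoder one prefers.
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.char_head = nn.Linear(2 * hidden, n_chars)    # orthographic targets
        self.phone_head = nn.Linear(2 * hidden, n_phones)  # explicit phonemic targets

    def forward(self, feats):
        enc, _ = self.encoder(feats)
        return self.char_head(enc), self.phone_head(enc)

ctc = nn.CTCLoss(blank=0, zero_infinity=True)

def loss(model, feats, feat_lens, char_tgt, char_lens, phone_tgt, phone_lens):
    char_logits, phone_logits = model(feats)
    # CTC expects (time, batch, classes) log-probabilities.
    char_lp = char_logits.log_softmax(-1).transpose(0, 1)
    phone_lp = phone_logits.log_softmax(-1).transpose(0, 1)
    char_loss = ctc(char_lp, char_tgt, feat_lens, char_lens)
    phone_loss = ctc(phone_lp, phone_tgt, feat_lens, phone_lens)
    return char_loss + 0.5 * phone_loss  # auxiliary phone supervision
```

The point is only that the phonemic representation becomes a supervised target rather than something the network is hoped to discover on its own.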
My understanding is that end-to-end systems have potential advantages over conventional feed-forward pipelines when information and uncertainty from earlier steps can be carried through to help later steps. But that doesn’t seem applicable here. Building these explicit mappings from words to pronunciations, and vice versa, is not all that hard, and the information used to resolve ambiguity is not particularly local. Cherry-picked examples aside, it is not at all clear that these models handle locally conditioned pronunciation variants (the article a pronounced “uh” or “ay”), homographs (the two pronunciations of bass in English), or highly deficient writing systems (think Perso-Arabic) any better than the ordinary pipeline approach. One has to suspect the long tail of these character-based systems is littered with nonsense.
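To be concrete about what the pipeline side looks like for a homograph like bass, here is a deliberately toy Python sketch: an explicit lexicon plus sentence-level cues. A real pipeline would use a trained homograph disambiguation model rather than hand-picked cue words; the IPA strings and cue sets below are made up for illustration.

```python
# Explicit homograph entries (IPA), keyed by a coarse sense label.
HOMOGRAPHS = {
    "bass": {"music": "beɪs", "fish": "bæs"},
}
# Hand-picked cue words standing in for a trained disambiguation model.
MUSIC_CUES = {"band", "guitar", "plays", "amp", "player"}
FISH_CUES = {"fish", "lake", "caught", "fishing", "river"}

def pronounce_bass(tokens, i):
    """Choose a pronunciation for tokens[i] (assumed to be 'bass')
    using cue words drawn from anywhere in the sentence."""
    context = {t.lower() for t in tokens[:i] + tokens[i + 1:]}
    if context & FISH_CUES:
        return HOMOGRAPHS["bass"]["fish"]
    if context & MUSIC_CUES:
        return HOMOGRAPHS["bass"]["music"]
    return HOMOGRAPHS["bass"]["music"]  # arbitrary default for the toy

for sentence in ("he caught a huge bass in the lake",
                 "she plays bass in a band"):
    tokens = sentence.split()
    print(sentence, "->", pronounce_bass(tokens, tokens.index("bass")))
# he caught a huge bass in the lake -> bæs
# she plays bass in a band -> beɪs
```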