We’ve seen in the previous post how an iOS device can understand and transcribe the voice commands we give it (speech to text). In this post, we will see the opposite – how the device can communicate information we have as a string in our app, with speech. We will extend the grocery list app from the previous post (make sure to check that one out first) by adding functionality that tells the user which remaining products they still need to buy from the list.

We will also provide a way to customize the voice that does the speaking, through a settings page. In order to accomplish this, we will need a different class (AVSpeechSynthesizer) from a different framework (AVFoundation). As the Apple docs tell us, this class produces synthesized speech from text on an iOS device, and provides methods for controlling or monitoring the progress of ongoing speech – which is exactly what we need, so let’s get started!

Let’s first create an object from the class we need. There’s also a new variable for the audio session, since we will use it in several methods and we don’t want to call the sharedInstance class method every time:

private var speechSynthesizer = AVSpeechSynthesizer()
private var audioSession = AVAudioSession.sharedInstance()

An AVSpeechUtterance is the basic unit of speech synthesis. In order for the class to speak your text, you need to provide it an object of type AVSpeechUtterance. It holds the text to be spoken, along with parameters that customize the voice, pitch, rate, and delay of the spoken text. The speech synthesizer keeps a queue (a FIFO data structure) of utterances to be spoken. There’s a method that checks whether the synthesizer is currently speaking; if it is, it simply adds the next utterances to the queue. The synthesizer also provides methods to pause, continue, and stop the speech, which might be useful if you want to develop an audiobooks app.

We will persist the user’s preferences for the speech parameters between app launches, and for this we will create a new class – SettingsManager. Here’s a part of the class; it just provides methods for saving and getting the values of the parameters.

We’ve changed the root view controller to be a UINavigationController, in order to be able to push view controllers from the initial grocery list screen. Now, let’s extend our storyboard with new screens and some design updates.
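To make the utterance description above concrete, here is a minimal sketch of creating and customizing an AVSpeechUtterance. The text and parameter values are illustrative, not taken from the post:

```swift
import AVFoundation

// Create an utterance from the text we want spoken.
let utterance = AVSpeechUtterance(string: "You still need to buy milk and eggs.")

// Customize the voice, pitch, rate, volume, and delay of the spoken text.
utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
utterance.rate = AVSpeechUtteranceDefaultSpeechRate  // 0.0 ... 1.0
utterance.pitchMultiplier = 1.2                      // 0.5 ... 2.0
utterance.volume = 0.8                               // 0.0 ... 1.0
utterance.preUtteranceDelay = 0.2                    // seconds to wait before speaking
```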
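Since the synthesizer queues utterances (FIFO) and exposes pause/continue/stop, a speaking method along the lines described in the post might look like this. The class and method names here are assumptions for illustration, not the post’s exact code:

```swift
import AVFoundation

final class Speaker {
    private let speechSynthesizer = AVSpeechSynthesizer()
    private let audioSession = AVAudioSession.sharedInstance()

    func speak(_ text: String) {
        // Activate the audio session so the speech is audible.
        try? audioSession.setCategory(.playback, mode: .default)
        try? audioSession.setActive(true)

        let utterance = AVSpeechUtterance(string: text)
        // If the synthesizer is already speaking (isSpeaking == true),
        // speak(_:) simply appends the utterance to its FIFO queue.
        speechSynthesizer.speak(utterance)
    }

    // Pause, resume, and stop - handy for an audiobooks-style app.
    func pause()  { speechSynthesizer.pauseSpeaking(at: .word) }
    func resume() { speechSynthesizer.continueSpeaking() }
    func stop()   { speechSynthesizer.stopSpeaking(at: .immediate) }
}
```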
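The post shows only part of SettingsManager. One possible sketch of such a class, persisting the speech parameters with UserDefaults, is below; the key names and default values are assumptions:

```swift
import Foundation

final class SettingsManager {
    private let defaults = UserDefaults.standard
    // Assumed key names, not taken from the post.
    private let rateKey = "speechRate"
    private let pitchKey = "speechPitch"
    private let volumeKey = "speechVolume"

    // Save the user's chosen speech parameters between app launches.
    func save(rate: Float, pitch: Float, volume: Float) {
        defaults.set(rate, forKey: rateKey)
        defaults.set(pitch, forKey: pitchKey)
        defaults.set(volume, forKey: volumeKey)
    }

    // Fall back to sensible defaults when nothing has been saved yet.
    func rate() -> Float {
        defaults.object(forKey: rateKey) == nil ? 0.5 : defaults.float(forKey: rateKey)
    }
    func pitch() -> Float {
        defaults.object(forKey: pitchKey) == nil ? 1.0 : defaults.float(forKey: pitchKey)
    }
    func volume() -> Float {
        defaults.object(forKey: volumeKey) == nil ? 1.0 : defaults.float(forKey: volumeKey)
    }
}
```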