"Taking the WP7 70-599 exam was a bit tricky due to there being no set material. But @silverlightshow has a good guide on what to learn, which I followed myself.
"
Matt Cavanagh, http://roguecode.co.za
This is the third part of my article series to prepare you for Microsoft's new Windows Phone 7 exam, which will be available starting July 14th. For a short introduction and overview of the exam, as well as a list of general learning materials, please take a look at the first article. The official outline of the measured skills breaks down the topics into the following parts:
- Designing Data Access Strategies (19%)
- Designing and Implementing Notification Strategies (17%)
- Working with Platform APIs, Tasks, and Choosers (21%) (this article)
- Designing the Application Architecture (21%)
- Designing the User Interface and User Experience (22%)
The last article was all about Push Notifications and hence needed to take the architecture of the whole Windows Phone ecosystem into consideration, including your own Web Services that back your applications, and Microsoft's Push Notification Service. In this part, we'll concentrate entirely on the phone and learn about some of the features the platform offers for your application.
A note on the MSDN documentation: the online resources in the MSDN library for Windows Phone have been replaced by the pre-release documents for Windows Phone OS 7.1 ("Mango"). The original documentation for Windows Phone OS 7.0 (which the exam is most likely based on) can be downloaded for offline use here (CHM format). Often the differences are small or have no effect on the exam; in some cases, however, "Mango" documents options that are not part of the RTM release at all. Compare the versions carefully, and if in doubt, base your learning on the 7.0 documentation.
Working with Platform APIs, Tasks, and Choosers
This part of the exam is a mixed bag of topics: data input using sensors and the touch screen, interaction with the platform in the form of tasks and choosers, and finally your application's design and the navigation concepts on the phone.
Design and implement sensor interaction
"This objective may include but is not limited to: choose which sensors are appropriate for your application; design location awareness (when to use different levels of GeopositionAccuracy); location awareness system setting"
The Windows Phone platform supports a variety of sensors, but not all of them are available to third-party developers in the RTM release. In particular, the compass and gyroscope will only be accessible in the future Mango update, and the same is true for the Motion class, a new abstraction on top of these low-level sensor APIs. For the exam, the focus regarding sensors is therefore on the accelerometer and on tracking the device location. By the way, a lot of people seem confused about the difference between an accelerometer and a gyroscope, and why you need both (in fact, you also need the compass or GPS) to keep track of the device's orientation. If you want a short and clear explanation of this topic, you can read this.
The accelerometer is a sensor that measures the acceleration forces acting on the phone. One example of such a force is of course Earth's gravity, but any kind of movement of the phone is also reflected in the values you receive from the sensor. Using the accelerometer is easy: the corresponding class provides an event that is raised when new data is available, a State property that gives you more information about the sensor (including whether it is available at all, which should be the case on all first-generation devices), and Start/Stop methods that give you control over when data is read.
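As a minimal sketch of that pattern (the page class name and the StatusText TextBlock are made up for the example):

using Microsoft.Devices.Sensors;   // Accelerometer
using Microsoft.Phone.Controls;    // PhoneApplicationPage

public partial class SensorPage : PhoneApplicationPage
{
    private Accelerometer accelerometer;

    public SensorPage()
    {
        InitializeComponent();
        StartAccelerometer();
    }

    private void StartAccelerometer()
    {
        accelerometer = new Accelerometer();
        accelerometer.ReadingChanged += OnAccelerometerReadingChanged;
        try
        {
            accelerometer.Start();
        }
        catch (AccelerometerFailedException)
        {
            // the sensor could not be started
        }
    }

    private void OnAccelerometerReadingChanged(object sender, AccelerometerReadingEventArgs e)
    {
        // readings arrive on a background thread; marshal to the UI thread before touching UI elements
        Dispatcher.BeginInvoke(() =>
            StatusText.Text = string.Format("X: {0:0.00} Y: {1:0.00} Z: {2:0.00}", e.X, e.Y, e.Z));
    }
}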
When people talk about location awareness and the Location API of Windows Phone, they often equate it with GPS. It is important to know that this is not accurate: with the default settings, the platform also uses data from Wi-Fi and the cellular radio to obtain (less accurate) location information (see GeoPositionAccuracy). You can force the higher accuracy of GPS if your application needs it, though. Just like the accelerometer, the class for the location service (GeoCoordinateWatcher) lets you start and stop reading data and query the status of the service, and once again data is delivered through events. Additionally, you can set a movement threshold to compensate for signal noise and optimize power consumption.
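A corresponding sketch for the location service looks very similar; the page name, the accuracy level and the 20 meter threshold are just example choices here (see the best practices link below for picking suitable values):

using System.Device.Location;      // GeoCoordinateWatcher, requires a reference to System.Device
using Microsoft.Phone.Controls;

public partial class LocationPage : PhoneApplicationPage
{
    private GeoCoordinateWatcher watcher;

    public LocationPage()
    {
        InitializeComponent();

        // GeoPositionAccuracy.High forces GPS; Default also uses Wi-Fi and cellular data
        watcher = new GeoCoordinateWatcher(GeoPositionAccuracy.High);
        watcher.MovementThreshold = 20; // meters; compensates for signal noise and saves power
        watcher.StatusChanged += OnStatusChanged;
        watcher.PositionChanged += OnPositionChanged;
        watcher.Start();
    }

    private void OnStatusChanged(object sender, GeoPositionStatusChangedEventArgs e)
    {
        if (e.Status == GeoPositionStatus.Disabled)
        {
            // location services are switched off in the phone's settings
        }
    }

    private void OnPositionChanged(object sender, GeoPositionChangedEventArgs<GeoCoordinate> e)
    {
        var location = e.Position.Location;
        // work with location.Latitude and location.Longitude here
    }
}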
- MSDN: How to Get Data from the Accelerometer Sensor for Windows Phone
- MSDN: How to Get Data from the Location Service for Windows Phone
- MSDN: Location Programming Best Practices for Windows Phone – Makes recommendations about the right level of accuracy and the movement threshold, and how that affects power consumption.
- Code samples for Windows Phone: Level Starter Kit and Location Service Sample – The level sample application makes use of the accelerometer to build a level for the phone; the second sample shows how to read geographic coordinates from the Location Service.
- App Hub: Location Services – Another sophisticated example that shows the use of GPS data, including a video walkthrough, tips and tricks and sample source code for download.
- Andrea Boschin: Using Sensors – An article on both the Accelerometer and GPS.
Plan for and implement the use of Tasks and Choosers
Tasks are Windows Phone's way of letting developers interact with certain features and built-in applications of the platform. Instead of having every application re-implement recurring tasks like selecting an email address from the contacts or taking a picture with the camera, Windows Phone offers a set of pre-built APIs that launch the corresponding system applications and/or dialogs. The advantages of this approach are that you as a developer have less work to do (the functionality is already available), the user experience is consistent across applications (two different applications present the same UI for the same task), and the platform gains an additional level of security, because the indirection keeps third-party applications from accessing certain resources on the phone directly. A drawback, of course, is that you are limited to the features these APIs offer, and some types of applications may not be possible to build at all. Luckily, the number of available features and tasks will grow in future versions like the upcoming Mango update.
Windows Phone's tasks are divided into two types: Launchers and Choosers. Launchers are used to, well, launch one of the built-in applications in a fire-and-forget manner. You can usually set some initial parameters (for example a recipient for an SMS compose task), but you do not receive any result or return values once the task has finished. Choosers on the other hand are tasks that return something to the calling application, for example the photo the user picked in the case of a photo chooser task. The application can then work with that result (or handle the case when the user did not choose anything).
Using a task is pretty simple: create it, set parameters where applicable, and call its Show method, which for choosers is a base method shared by all of them. The launchers all have a Show method too, but it is not inherited from a common base class. For choosers there are additional requirements: declare the chooser as a field with page scope (not as a local variable), and both construct it and hook up the event that delivers its result in the page's constructor.
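Here is a sketch of both flavors on a single page; the button click handlers and the phone number are made up for the example, but the chooser pattern (page-level field, event hooked up in the constructor) is the part that matters:

using System.Windows;
using Microsoft.Phone.Controls;
using Microsoft.Phone.Tasks;

public partial class MainPage : PhoneApplicationPage
{
    // chooser: declared as a page-level field, constructed and hooked up in the constructor
    private readonly PhotoChooserTask photoChooser = new PhotoChooserTask();

    public MainPage()
    {
        InitializeComponent();
        photoChooser.Completed += OnPhotoChooserCompleted;
    }

    private void PickPhoto_Click(object sender, RoutedEventArgs e)
    {
        photoChooser.Show();
    }

    private void OnPhotoChooserCompleted(object sender, PhotoResult e)
    {
        if (e.TaskResult == TaskResult.OK)
        {
            // e.ChosenPhoto is a stream containing the selected picture
        }
    }

    private void SendSms_Click(object sender, RoutedEventArgs e)
    {
        // launcher: fire and forget, no result is returned to the application
        var sms = new SmsComposeTask { To = "+1 555 0100", Body = "Hello from WP7" };
        sms.Show();
    }
}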
The following list contains more resources, tips and samples for using tasks:
- MSDN: Launchers and Choosers Overview for Windows Phone
- MSDN: How to Use Launchers for Windows Phone
- MSDN: How to Use Choosers for Windows Phone
- James Ashley: Re-examining WP7 Launchers and Choosers – A nice overview of the available tasks in WP7 RTM, including their additional behaviors (like whether they defer tombstoning etc.).
- App Hub: Launchers and Choosers
- Jesse Liberty has some posts that involve launchers and choosers: Part 1, Part 2, Part 3, Part 4
- Channel 9: Windows Phone 7 Jump Start Part 9 – Launchers and Choosers
Plan for and implement multitouch and gestures
"This objective may include but is not limited to: manipulation events (ManipulationStarted, ManipulationCompleted, ManipulationDelta)"
Touch input and gestures are the primary and most important input method for most applications on the Windows Phone platform. The screens of the current devices support up to four simultaneous touch points, which allows for fairly complex and sophisticated gestures (although the recommendation, of course, is to keep things as simple as possible).
The fundamental pieces involved in multitouch gestures in Silverlight are the ManipulationStarted, ManipulationDelta and ManipulationCompleted events of the UIElement class. The event arguments of these events carry a lot of information about translation and scaling manipulations, like the origin, the delta since the last event, the cumulative manipulation, and the rate of change (velocity). This makes it easy to create simple drag-and-drop functionality, two-finger zoom features, and your own simple gestures. "Gestures" are just a name for the high-level interpretation of the data provided by this kind of touch input.
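As a minimal sketch, the following drags an element around by feeding the translation delta of each ManipulationDelta event into a TranslateTransform; the page class and the element named Box are assumptions for the example:

using System.Windows.Input;
using System.Windows.Media;
using Microsoft.Phone.Controls;

public partial class TouchPage : PhoneApplicationPage
{
    private readonly TranslateTransform move = new TranslateTransform();

    public TouchPage()
    {
        InitializeComponent();

        // "Box" is assumed to be an element declared in the page's XAML
        Box.RenderTransform = move;
        Box.ManipulationDelta += OnBoxManipulationDelta;
        Box.ManipulationCompleted += OnBoxManipulationCompleted;
    }

    private void OnBoxManipulationDelta(object sender, ManipulationDeltaEventArgs e)
    {
        // DeltaManipulation contains the change since the last event,
        // CumulativeManipulation the total change since ManipulationStarted
        move.X += e.DeltaManipulation.Translation.X;
        move.Y += e.DeltaManipulation.Translation.Y;
    }

    private void OnBoxManipulationCompleted(object sender, ManipulationCompletedEventArgs e)
    {
        // e.FinalVelocities could be used to add a simple inertia effect manually
    }
}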
Another way of dealing with touch input is to use the XNA classes for it, which is a perfectly legitimate approach in Silverlight applications too. A prominent example of this is the Silverlight Toolkit for Windows Phone, which provides a GestureListener class that uses XNA's touch input features to expose common gestures like flicks to Silverlight. Note that if you want to work with multiple touch points and create your own gestures for two or more fingers, you have to work with these XNA classes, as Silverlight does not support this at the moment.
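If the Silverlight Toolkit for Windows Phone is referenced, attaching its GestureListener to an element from code-behind might look like the following sketch (the element name Box is again an assumption):

using Microsoft.Phone.Controls; // GestureService and GestureListener live in the Toolkit assembly

// in the page's constructor, after InitializeComponent():
GestureListener gestures = GestureService.GetGestureListener(Box);
gestures.Flick += (s, e) =>
{
    // e.Direction, e.HorizontalVelocity and e.VerticalVelocity describe the flick
};
gestures.PinchDelta += (s, e) =>
{
    // e.DistanceRatio reports the relative change of the distance between the two fingers
};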
- MSDN: Gesture Support for Windows Phone
- MSDN: How to Handle Manipulation Events
- MSDN: Detecting Gestures on a Multitouch Screen (Windows Phone) – Explains the basics of XNA-based touch input.
- Codeplex: Silverlight Toolkit for Windows Phone – Provides a GestureListener class that demonstrates how to work with XNA's TouchPanel class in Silverlight.
- App Hub: Touch Input – Explains the involved Silverlight manipulation events in more detail and shows how to handle them, among other possibilities for touch input.
- User Experience Design Guide: Gestures for Windows Phone – Explains the different kinds of well-known gestures in great detail.
- User Experience Design Guide: Interactions and Usability with Windows Phone – Details the size requirements of touch targets and gives further recommendations on the topic.
Design and implement application navigation
"This objective may include but is not limited to: pass parameters (NavigationContext API), manipulate the navigation stack (NavigationService API), use of the Back button, PhoneApplicationPage class and PhoneApplicationFrame class and the difference between these two classes"
The whole application concept on Windows Phone is built around page-based navigation: the user moves between different parts of your application, and even between multiple applications and built-in system features, with pages as the smallest unit. The PhoneApplicationFrame is the top-level container in this concept; it hosts and navigates between PhoneApplicationPages, one at a time. The frame therefore offers helper properties and methods for navigation, like the Navigate and GoBack methods. It is also the place to find out whether the application is currently covered by system dialogs or similar UI (e.g. incoming phone calls or the lock screen), through the Obscured/Unobscured events.
Pages, on the other hand, have extension points that let you hook into the navigation process, for example the OnNavigatedFrom and OnNavigatedTo overrides. In addition, they provide the NavigationContext and NavigationService properties. The latter conveniently exposes methods, properties and events similar to those of the frame, so you can use the navigation features easily from within a page. The navigation context, in turn, gives access to the query string that was used to navigate to the current page, which can serve as a mechanism to pass arguments between pages. Other important things controlled at the page level (through properties of the page) are, for example, the visibility of the application bar, the supported screen orientations, and how automated caching of page content happens.
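A small sketch of that query string mechanism; the page name DetailsPage.xaml and the parameter name id are made up for the example:

using System;
using System.Windows.Navigation;
using Microsoft.Phone.Controls;

// in the calling page, e.g. in a button click handler:
NavigationService.Navigate(new Uri("/DetailsPage.xaml?id=42", UriKind.Relative));

// in the code-behind of DetailsPage.xaml:
public partial class DetailsPage : PhoneApplicationPage
{
    public DetailsPage()
    {
        InitializeComponent();
    }

    protected override void OnNavigatedTo(NavigationEventArgs e)
    {
        base.OnNavigatedTo(e);

        string id;
        if (NavigationContext.QueryString.TryGetValue("id", out id))
        {
            // use the passed value, e.g. to load the matching item
        }
    }
}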
It's important to know that the runtime keeps track of how the user navigates between your application's pages and automatically re-activates the previous page when they press the hardware Back button (unless you cancel that back navigation). This is called the navigation "back stack". Make sure you fully understand this mechanism and design your application so it integrates nicely with the concept; otherwise you can easily end up with loops in your application's navigation, which in turn causes the application to fail marketplace certification.
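A page can intercept that back navigation by overriding OnBackKeyPress, for example to ask for confirmation before an edit screen is left. This is only a sketch (page name and dialog text are made up), and cancelling should be used sparingly so the behavior stays within the certification requirements linked below:

using System.ComponentModel;
using System.Windows;
using Microsoft.Phone.Controls;

public partial class EditPage : PhoneApplicationPage
{
    public EditPage()
    {
        InitializeComponent();
    }

    protected override void OnBackKeyPress(CancelEventArgs e)
    {
        base.OnBackKeyPress(e);

        var result = MessageBox.Show("Discard your changes?", "Confirm", MessageBoxButton.OKCancel);
        if (result != MessageBoxResult.OK)
        {
            // stay on the current page instead of navigating back
            e.Cancel = true;
        }
    }
}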
- MSDN: Frame and Page Navigation Overview for Windows Phone
- MSDN: Technical Certification Requirements – Section 5.2.4ff contains the requirements regarding the Back button.
- App Hub: Navigation – A detailed example that also shows how to pass data between pages using query strings.
- Peter Torr: Introducing the concept of "Places" – Still a great reference that helps with the general application layout on the phone to avoid the most common mistakes with regards to navigation problems on Windows Phone.
- Andrea Boschin: Understanding navigation – Another article on navigation and passing data between pages.
- The Windows Phone Developer Blog: Solving Circular Navigation – Provides a recipe for resolving loops in application navigation. A nice sample to learn about how the back-stack can be manipulated.
Summary
This part of the exam preparation was all about working with the platform and its APIs. The available APIs make using sensors and touch input really easy, but that doesn't mean you cannot create complex applications with them. Tasks seem equally easy to work with at first, but they have some subtle requirements for choosers that you need to meet; in the next part, we'll also see how the application life cycle needs to be taken into account for them. Finally, the navigation concept is something you should be able to work with blindfolded – it's really one of the fundamental features of the platform.