The Windows Phone 8 platform has its own layout styles that make it stand out from any other mobile platform. These unique layouts are possible thanks to a few convenient built-in controls of the Windows Phone 8 SDK. The controls that we're going to be looking at in this tutorial are the Pivot and Panorama controls.
1. Panorama
What is it?
The Windows Phone Panorama layout control offers a unique approach to presenting content to the user. A Panorama consists of multiple panels, or panorama items, that each represent a page. At any one time, only one panorama item is visible, alongside a small portion of the previous or next panorama item. Using the Panorama control feels like peeking through a keyhole: you can see part of the room behind the door, but not the entire room.
The above screenshot is a great example of the Panorama control in action. It contains five panorama items, each of which represents a page with content.
In the above screenshot, the active panorama item is titled menu. At the same time, we can see a glimpse of the next panorama item, titled featured. The Panorama control shows the user that more content is waiting to be discovered on the right. Let's find out how to use the Panorama control.
Creating a Panorama Control
Start by creating a new Windows Phone project. To add a Panorama control to the project, choose Add New Item > Windows Phone Panorama Page > Add from the Project menu. This should add a Panorama control with two panorama items. The Panorama control should be visible in Visual Studio's design view.
Let's add some content to the Panorama control. We're going to populate the first panorama item with a list of colors and the second panorama item with a number of colored rectangles that correspond with the list of colors of the first panorama item. Right now, the Panorama control contains a Grid control with a name of LayoutRoot as shown below.
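The template's markup isn't reproduced in this chunk, so here's a sketch of what the generated XAML looks like, assuming the page declares the standard phone XML namespace for Microsoft.Phone.Controls:

```xml
<!-- LayoutRoot is the root grid where all page content is placed. -->
<Grid x:Name="LayoutRoot">
    <phone:Panorama Title="my application">

        <phone:PanoramaItem Header="item1">
            <Grid/>
        </phone:PanoramaItem>

        <phone:PanoramaItem Header="item2">
            <Grid/>
        </phone:PanoramaItem>

    </phone:Panorama>
</Grid>
```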
The Grid control named LayoutRoot is the main container of the current page of our application, holding every other element of the page. Remember that in XAML, controls are structured hierarchically, much like XML.
The Panorama control is nested in the Grid control and has a Title property of "my application". The Panorama control contains the panorama items. As you can see in the above XAML snippet, the Panorama control currently contains two panorama items. The Header property of the panorama items is item1 and item2 respectively.
The Header property of a panorama item is similar to the Title property of the Panorama control and you can change them to whatever you like.
Adding Colors
Let's now populate the panorama items with some content like we've discussed earlier. Update the content of the two panorama items as shown below.
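The updated markup isn't included in this chunk, so the snippet below is a sketch of what it might look like. The five color names are illustrative assumptions; substitute whichever colors you prefer.

```xml
<phone:Panorama Title="my application">

    <phone:PanoramaItem Header="color names">
        <StackPanel>
            <TextBlock Text="Red" FontSize="30"/>
            <TextBlock Text="Green" FontSize="30"/>
            <TextBlock Text="Blue" FontSize="30"/>
            <TextBlock Text="Yellow" FontSize="30"/>
            <TextBlock Text="Purple" FontSize="30"/>
        </StackPanel>
    </phone:PanoramaItem>

    <phone:PanoramaItem Header="colors">
        <StackPanel>
            <!-- Each rectangle is filled with one of the colors listed
                 in the first panorama item. -->
            <Rectangle Fill="Red" Height="50" Margin="0,0,0,10"/>
            <Rectangle Fill="Green" Height="50" Margin="0,0,0,10"/>
            <Rectangle Fill="Blue" Height="50" Margin="0,0,0,10"/>
            <Rectangle Fill="Yellow" Height="50" Margin="0,0,0,10"/>
            <Rectangle Fill="Purple" Height="50" Margin="0,0,0,10"/>
        </StackPanel>
    </phone:PanoramaItem>

</phone:Panorama>
```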
As you can see, I've changed the Header property of the panorama items to color names and colors respectively. To the first panorama item, I've added a StackPanel control containing five TextBlock controls. Each of the TextBlock controls has its Text property set to the name of a color. I've also set the FontSize property of each TextBlock control to 30px to make the text larger.
The second PanoramaItem control also contains a StackPanel control, containing five Rectangle controls. Each Rectangle control is filled with a color listed in the first panorama item using its Fill property. The Height property of the rectangles is set to 50px and the Margin property is set to 0, 0, 0, 10, which translates to a bottom margin of 10px. You can see the result of your work in the design view of your IDE as shown below.
Now that we've populated the Panorama control with some content, it's time to focus on the second control of this tutorial, the Pivot control.
2. Pivot
What is it?
The Pivot control is another way to present content to the user, unique to the Windows Phone platform. The Pivot control is similar to the Panorama control in some ways, but it has a number of features that set it apart.
Like a Panorama control, a Pivot control can consist of multiple PivotItem controls. Each pivot item can contain other controls, such as Grid and StackPanel controls. The above screenshot shows a Pivot control with two PivotItem controls, directory and facility.
While the Panorama control shows a sneak peek of the next page, the Pivot control does the same for the Header at the top of the Pivot control. This is clearly shown in the above example in which you can see the first letters of the word facility, the title of the second pivot item. To illustrate that the second pivot item is not in focus, its title is greyed out.
Creating a Pivot Control
Let's create a Pivot control by following the same steps we took to create a Panorama control, this time selecting the Windows Phone Pivot Page option. As we did with the Panorama control, populate the Pivot control with the list of colors and their names. The resulting XAML code for the Pivot control should look similar to what is shown below.
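The Pivot listing isn't included in this chunk. A sketch of what it might look like follows; it mirrors the Panorama markup, with the same assumed color names, and assumes the page declares the standard phone XML namespace:

```xml
<Grid x:Name="LayoutRoot">
    <phone:Pivot Title="MY APPLICATION">

        <phone:PivotItem Header="color names">
            <StackPanel>
                <TextBlock Text="Red" FontSize="30"/>
                <TextBlock Text="Green" FontSize="30"/>
                <TextBlock Text="Blue" FontSize="30"/>
                <TextBlock Text="Yellow" FontSize="30"/>
                <TextBlock Text="Purple" FontSize="30"/>
            </StackPanel>
        </phone:PivotItem>

        <phone:PivotItem Header="colors">
            <StackPanel>
                <Rectangle Fill="Red" Height="50" Margin="0,0,0,10"/>
                <Rectangle Fill="Green" Height="50" Margin="0,0,0,10"/>
                <Rectangle Fill="Blue" Height="50" Margin="0,0,0,10"/>
                <Rectangle Fill="Yellow" Height="50" Margin="0,0,0,10"/>
                <Rectangle Fill="Purple" Height="50" Margin="0,0,0,10"/>
            </StackPanel>
        </phone:PivotItem>

    </phone:Pivot>
</Grid>
```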
Before we can build and run the application to see both controls in action, we need to implement a way to navigate the application's pages. That will be the focus of the next section.
3. Page Navigation
If you run the application in its current form, you will see the MainPage.xaml page, the default entry point for every Windows Phone application. Let's change this.
Adding Buttons
To navigate to the Panorama and Pivot control we implemented earlier, we need to add two Button controls to the MainPage.xaml page. Double-click MainPage.xaml in your IDE and drag two Button controls from the Toolbox to the page in Visual Studio's design view.
As you can see below, I've also changed the Content properties of the Button controls to read Panorama and Pivot.
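The button markup isn't reproduced here, so this is a sketch of what it might look like. The StackPanel wrapper and its alignment properties are assumptions about the layout; what matters is the two Button controls and their Content properties:

```xml
<StackPanel Orientation="Horizontal"
            HorizontalAlignment="Center"
            VerticalAlignment="Center">
    <Button Content="Panorama" Click="Button_Click"/>
    <Button Content="Pivot" Click="Button_Click_1"/>
</StackPanel>
```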
Implementing the Button Controls
When the user taps a Button control, we want the application to navigate to either the Panorama or the Pivot control. Let's start with the left button first.
Panorama
Start by double-clicking the left Button control in the design view. This should take you to MainPage.xaml.cs, which contains the class that is linked to MainPage.xaml. Visual Studio has already created a method for us, Button_Click, which is invoked when the user taps the button labeled Panorama.
When the user taps the first button, the application should take them to the Panorama control. We accomplish this by updating the Button_Click method as shown below.
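The updated handler isn't included in this chunk; a minimal sketch looks like this, assuming the Panorama page kept its default name, PanoramaPage1.xaml:

```csharp
private void Button_Click(object sender, RoutedEventArgs e)
{
    // Navigate to the page that hosts the Panorama control.
    NavigationService.Navigate(new Uri("/PanoramaPage1.xaml", UriKind.Relative));
}
```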
We invoke the Navigate method on NavigationService, passing in the destination, a Uri instance, and the type of destination, UriKind.Relative. Note that the name of the destination page needs to match the first page of the Panorama control, PanoramaPage1.xaml in the above example. Don't forget the leading forward slash.
Pivot
Navigating to the Pivot control is very similar. Open MainPage.xaml, double-click the Button control labeled Pivot, and implement the event handler, Button_Click_1, as shown below. The only difference is the destination we navigate to, PivotPage1.xaml.
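A sketch of the second handler, assuming the Pivot page kept its default name, PivotPage1.xaml:

```csharp
private void Button_Click_1(object sender, RoutedEventArgs e)
{
    // Navigate to the page that hosts the Pivot control.
    NavigationService.Navigate(new Uri("/PivotPage1.xaml", UriKind.Relative));
}
```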
Build and run the application to test the buttons as well as the Panorama and Pivot controls. Use the physical back button of your device or emulator to navigate back to the previous page.
Conclusion
In this tutorial, we've covered two important layout controls of the Windows Phone platform, the Panorama and Pivot controls. We also revisited navigation and used some of the common controls on Windows Phone, such as Button, TextBlock, and Rectangle controls. In the next article, we will conclude this introductory series on Windows Phone and look ahead at what's next for you.
Developing a mobile product requires hard work, and the results after you launch your application are uncertain.
Luckily, there's a method to test possible app ideas you have. You can gain a lot of knowledge by building a minimum viable product. The value of MVPs is that you have to spend less time designing and developing. As a result, you'll get a lot of insight early on. This improves the quality of the decisions you make for your product.
In this tutorial, you'll learn:
what a minimum viable product or MVP is
how to define your MVP
applying MVP best practices
building an MVP
launching an MVP and receiving user feedback
1. What Is an MVP?
An MVP, or minimum viable product, is a product that has just enough features to test if it is viable in the market. To achieve this, all unnecessary features are stripped away and the application only contains the features that are deemed the core of the product.
Minimum: This, as described previously, means that the product just contains core features and everything that isn't a must-have is stripped away.
Viable: This means that the product has the opportunity to get traction and that it creates value for people. Value is a broad term. For example, a game provides entertainment, which is value. Usually we consider a product viable if it can generate enough revenue to be worth the cost of developing it.
Product: Of course, you're building a product. You're producing a digital good for people to use.
When you have a vision for a product, it's often complex. You want users to achieve a variety of things using your app. However, the actual core of a product is often pretty small and simple.
A great example of an MVP would be Snapchat. Snapchat has one particular focus: you can view and send images to other users, but the images you send are only temporarily visible. It's a simple product with a focus. They tested this core concept and it succeeded. Only after the initial launch and product validation did it make sense to start working on more features.
Many of the applications we know are far beyond the scope of an MVP. Let's take Instagram for example. Initially, the MVP could have focused on just filters. You would be able to take a photo, choose an existing photo, and put one of the, let's say five, filters on the photo and save it back to the device's camera roll.
Releasing Instagram as a minimum viable product would have tested the assumption that people use filters to improve their photos. If the app gains traction, you can work on updates, such as profiles and video support. If it doesn't work out and you don't get traction, then it's probably not worth continuing development. Perhaps a different idea is more viable.
Working on MVPs means taking opportunity cost into account while working on a product. After all, by stopping development early when a product fails, you free up the time to build a successful product instead.
Define the core of your product. Build that first set of features and test it on the market.
Developing every possible feature you have in your mind can take months while a simple MVP can take only a couple of weeks to create.
Releasing early has another advantage, user feedback. You're able to collect user feedback early on and you can shape the product based on what your users want.
2. Defining Your MVP
Before you can actually start developing, you need to define your MVP and product roadmap. What features are must-haves and which ones are nice-to-haves? It's very important to stay as objective as possible during this process. A feature you deeply care about might not be the core of the product. Decide if a feature is a must-have or a nice-to-have.
A typical nice-to-have is forgot password functionality. Instead, you could show a support email address. Once you have traction, you can improve this feature and build a proper forgot password flow. At this phase, it's all about limiting the time it takes to get to market.
Write down the feature set of your product in a document. Basically, you write down all the features of your product in detail. This is a working document and gives you an overview of what you will be creating. It's also a useful document to brief designers and investors for example. It's putting your vision on paper. You can also briefly mention for yourself how you accomplish a feature in a technical sense. Feature sets usually include a technical scope. This is especially useful if you work on a project with multiple developers.
The next step would be listing your features in terms of priority. What's the most important feature and which one creates the most value in short term? Once you've defined that, you can put the remaining features on a product roadmap to define what you'll be building once your product takes off.
A good way to better understand your product's feature set is to rate every feature on a scale of 1 to 10, taking into account product importance, complexity, and added value for the user. You can make better decisions in terms of the timeline for your product when you understand the various components of each feature.
3. MVP Best Practices
These are my personal reminders when I define a minimum viable product.
When you have an idea, take a look at the existing market. What similar products are out there? What is their value proposition? How would you do it differently or, even more important, better?
Once I've finished writing a feature set, I always review every feature and ask myself if it's truly necessary. Does the user really need to create an account? Can we drop features so we can avoid building a backend?
Second opinions are very valuable when scoping an MVP.
Are there APIs, SDKs, or frameworks available that can do some of the work for me?
For product roadmaps, I plan one release ahead and I try to keep roadmaps short-term as they'll be strongly influenced by user feedback.
Do proper technical research once you've finished your feature set. No one likes surprises when they're developing a product.
Talk about your idea. There's a lot of value in feedback.
When collaborating with designers, ask them to stick as much as possible to iOS standards. Try to reduce the amount of animations in the product.
4. Building an MVP
Every developer or team has different preferences about how to build a product. I'll keep it brief, build the product the way you like and don't lose sight of what you initially defined as the minimum viable product.
It's okay to say no.
Be aware of feature creep, especially if multiple stakeholders are involved. A lot of suggestions you have for features can be included in a next release. As long as you continue to revisit the feature set and make smart decisions based on the initial product vision and information you might gain along the way, you'll stay on track.
Quality assurance of the product is another important step of building an MVP. Assure that your product simply works. Spend enough time to do bug fixing. If you're a solo developer, consider a small private beta with friends and family. If you have a budget, then hiring a QA firm can also be an effective solution to keep your product free of critical bugs that could harm the product's launch.
5. Product Launch & User Feedback
Well done! You've finished building your product. Now is when the real work starts. After you've finished developing, these are your next short-term priorities:
get traction for your product
get feedback on your initial product
identify flaws, such as bugs, product issues, and missing features
identify your product's strengths
Marketing your new app is not always easy. Here's a tutorial to help you if you need help getting that initial traction. Once you have a first set of users, your priorities shift once again. Now it's all about:
asking for feedback from your user base
analyzing user feedback
updating the product roadmap and continuing development
It's not easy to get feedback from users. Your rating on the App Store and user reviews tell you something, but the trick is to get some more in-depth feedback. It's important to always be available to your users. Have a Twitter account, include your contact information in the app, and don't be afraid to be proactive by reaching out to your users.
If you've worked with testers for your product, then you already have a list of people you can have a conversation with.
Whenever you get feedback, it's important to analyze it. Be aware that when something is bad, there's a bigger chance people speak up rather than when they like something. The feedback you get might be bad, but that doesn't necessarily mean your product is bad.
To evaluate if your product is viable, use app analytics, such as Mixpanel to keep track of user activity and retention.
Statistics define if a product is viable, not user feedback.
Compare the user feedback to your original product vision and product roadmap. The most difficult part is defining how this feedback should shape the product vision, and that's a choice every product owner needs to make for themselves.
Conclusion
Well done! You've learned about minimum viable products, how they make product development more efficient, and how you're able to make better decisions after your product is launched.
A final tip I want to give is probably the most important lesson: know when your MVP is not viable. Deciding not to pursue a product idea is probably one of the hardest decisions a developer has to make, but there's no doubt in my mind it will sometimes happen when you create products. Statistics are extremely valuable post-launch and will help you make data-driven decisions.
I'm eager to hear about your experiences building products so far. Please share them in the comments. Any questions and feedback are welcome as well.
In the previous article about iOS 8 and Core Data, we discussed batch updates. Batch updates aren't the only new API in town. As of iOS 8 and OS X Yosemite, it's possible to asynchronously fetch data. In this tutorial, we'll take a closer look at how to implement asynchronous fetching and in what situations your application can benefit from this new API.
1. The Problem
Like batch updates, asynchronous fetching has been on the wish list of many developers for quite some time. Fetch requests can be complex, taking a non-trivial amount of time to complete. During that time, the fetch request blocks the thread it's running on and, as a result, blocks access to the managed object context executing the fetch request. The problem is simple to understand, but what does Apple's solution look like?
2. The Solution
Apple's answer to this problem is asynchronous fetching. An asynchronous fetch request runs in the background. This means that it doesn't block other tasks while it's being executed, such as updating the user interface on the main thread.
Asynchronous fetching also sports two other convenient features, progress reporting and cancellation. An asynchronous fetch request can be cancelled at any time, for example, when the user decides the fetch request takes too long to complete. Progress reporting is a useful addition to show the user the current state of the fetch request.
Asynchronous fetching is a flexible API. Not only is it possible to cancel an asynchronous fetch request, it's also possible to make changes to the managed object context while the asynchronous fetch request is being executed. In other words, the user can continue to use your application while the application executes an asynchronous fetch request in the background.
3. How Does It Work?
Like batch updates, asynchronous fetch requests are handed to the managed object context as an NSPersistentStoreRequest object, an instance of the NSAsynchronousFetchRequest class to be precise.
An NSAsynchronousFetchRequest instance is initialized with an NSFetchRequest object and a completion block. The completion block is executed when the asynchronous fetch request has completed its fetch request.
Let's revisit the to-do application we created earlier in this series and replace the current implementation of the NSFetchedResultsController class with an asynchronous fetch request.
Step 1: Project Setup
Download or clone the project from GitHub and open it in Xcode 6. Before we can start working with the NSAsynchronousFetchRequest class, we need to make some changes. We won't be able to use the NSFetchedResultsController class for managing the table view's data since the NSFetchedResultsController class was designed to run on the main thread.
Step 2: Replacing the Fetched Results Controller
Start by updating the private class extension of the TSPViewController class as shown below. We remove the fetchedResultsController property and create a new property, items, of type NSArray for storing the to-do items. This also means that the TSPViewController class no longer needs to conform to the NSFetchedResultsControllerDelegate protocol.
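The updated class extension isn't reproduced in this chunk; a minimal sketch of it looks like this:

```objc
// Private class extension of TSPViewController. The fetched results
// controller property is gone, replaced by a plain array of to-do items.
@interface TSPViewController ()

@property (strong, nonatomic) NSArray *items;

@end
```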
Before we refactor the viewDidLoad method, I first want to update the implementation of the UITableViewDataSource protocol. Take a look at the changes I've made in the following code blocks.
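The data source listing is missing from this chunk. The sketch below shows what the refactored methods might look like; the cell identifier and the name attribute of the TSPItem entity are assumptions based on the conventions used elsewhere in this series:

```objc
- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    // The items array backs the table view instead of a fetched results controller.
    return self.items ? self.items.count : 0;
}

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"CellIdentifier"
                                                            forIndexPath:indexPath];

    // Fetch the record for this row from the items array.
    NSManagedObject *record = [self.items objectAtIndex:indexPath.row];
    cell.textLabel.text = [record valueForKey:@"name"];

    return cell;
}
```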
We also need to change one line of code in the prepareForSegue:sender: method as shown below.
// Fetch Record
NSManagedObject *record = [self.items objectAtIndex:self.selection.row];
Last but not least, delete the implementation of the NSFetchedResultsControllerDelegate protocol since we no longer need it.
Step 3: Creating the Asynchronous Fetch Request
As you can see below, we create the asynchronous fetch request in the view controller's viewDidLoad method. Let's take a moment to see what's going on.
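The viewDidLoad listing isn't included in this chunk; the sketch below reconstructs it from the description that follows. The entity name TSPItem and the createdAt sort key are taken from the crash log later in the article, while the managedObjectContext property name is an assumption:

```objc
- (void)viewDidLoad {
    [super viewDidLoad];

    // Create and configure the fetch request that the asynchronous
    // fetch request will execute in the background.
    NSFetchRequest *fetchRequest = [NSFetchRequest fetchRequestWithEntityName:@"TSPItem"];
    [fetchRequest setSortDescriptors:@[[NSSortDescriptor sortDescriptorWithKey:@"createdAt"
                                                                    ascending:YES]]];

    // Initialize the asynchronous fetch request with the fetch request
    // and a completion block that processes the result.
    NSAsynchronousFetchRequest *asynchronousFetchRequest =
        [[NSAsynchronousFetchRequest alloc] initWithFetchRequest:fetchRequest
                                                 completionBlock:^(NSAsynchronousFetchResult *result) {
        [self processAsynchronousFetchResult:result];
    }];

    // Execute the asynchronous fetch request on the managed object
    // context's queue.
    [self.managedObjectContext performBlock:^{
        NSError *error = nil;
        [self.managedObjectContext executeRequest:asynchronousFetchRequest error:&error];

        if (error) {
            NSLog(@"Unable to execute asynchronous fetch request.");
            NSLog(@"%@, %@", error, error.localizedDescription);
        }
    }];
}
```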
We start by creating and configuring an NSFetchRequest instance to initialize the asynchronous fetch request. It's this fetch request that the asynchronous fetch request will execute in the background.
The completion block is invoked when the asynchronous fetch request has completed executing its fetch request. The completion block takes one argument of type NSAsynchronousFetchResult, which contains the result of the query as well as a reference to the original asynchronous fetch request.
In the completion block, we invoke processAsynchronousFetchResult:, passing in the NSAsynchronousFetchResult object. We'll take a look at this helper method in a few moments.
Executing the asynchronous fetch request is almost identical to how we execute an NSBatchUpdateRequest. We call executeRequest:error: on the managed object context, passing in the asynchronous fetch request and a pointer to an NSError object.
Note that we execute the asynchronous fetch request by calling performBlock: on the managed object context. While this isn't strictly necessary since the viewDidLoad method, in which we create and execute the asynchronous fetch request, is called on the main thread, it's a good habit and best practice to do so.
Even though the asynchronous fetch request is executed in the background, note that the executeRequest:error: method returns immediately, handing us an NSAsynchronousFetchResult object. Once the asynchronous fetch request completes, that same NSAsynchronousFetchResult object is populated with the result of the fetch request.
Finally, we check if the asynchronous fetch request was executed without issues by checking if the NSError object is equal to nil.
Step 4: Processing the Asynchronous Fetch Result
The processAsynchronousFetchResult: method is nothing more than a helper method in which we process the result of the asynchronous fetch request. We set the view controller's items property with the contents of the result's finalResult property and reload the table view.
Build the project and run the application in the iOS Simulator. You may be surprised to see your application crash when it tries to execute the asynchronous fetch request. Fortunately, the output in the console tells us what went wrong.
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'NSConfinementConcurrencyType context <NSManagedObjectContext: 0x7fce3a731e60> cannot support asynchronous fetch request <NSAsynchronousFetchRequest: 0x7fce3a414300> with fetch request <NSFetchRequest: 0x7fce3a460860> (entity: TSPItem; predicate: ((null)); sortDescriptors: ((
"(createdAt, ascending, compare:)"
)); type: NSManagedObjectResultType; ).'
If you haven't read the article about Core Data and concurrency, you may be confused by what you're reading. Remember that Core Data declares three concurrency types, NSConfinementConcurrencyType, NSPrivateQueueConcurrencyType, and NSMainQueueConcurrencyType. Whenever you create a managed object context by invoking the class's init method, the resulting managed object context's concurrency type is equal to NSConfinementConcurrencyType. This is the default concurrency type.
The problem, however, is that asynchronous fetching is incompatible with the NSConfinementConcurrencyType type. Without going into too much detail, it's important to know that the asynchronous fetch request needs to merge the results of its fetch request with the managed object context that executed the asynchronous fetch request. It needs to know on which dispatch queue it can do this and that is why only NSPrivateQueueConcurrencyType and NSMainQueueConcurrencyType support asynchronous fetching. The solution is very simple though.
Step 6: Configuring the Managed Object Context
Open TSPAppDelegate.m and update the managedObjectContext method as shown below.
The only change we've made is replacing the init method with initWithConcurrencyType:, passing in NSMainQueueConcurrencyType as the argument. This means that the managed object context should only be accessed from the main thread. This works fine as long as we use the performBlock: or performBlockAndWait: methods to access the managed object context.
Run the project one more time to make sure that our change has indeed fixed the problem.
4. Showing Progress
The NSAsynchronousFetchRequest class adds support for monitoring the progress of the fetch request and it's even possible to cancel an asynchronous fetch request, for example, if the user decides that it's taking too long to complete.
The NSAsynchronousFetchRequest class leverages the NSProgress class for progress reporting as well as canceling an asynchronous fetch request. The NSProgress class, available since iOS 7 and OS X 10.9, is a clever way to monitor the progress of a task without the need to tightly couple the task to the user interface.
The NSProgress class also support cancelation, which is how an asynchronous fetch request can be canceled. Let's find out what we need to do to implement progress reporting for the asynchronous fetch request.
Step 1: Adding SVProgressHUD
We'll show the user the progress of the asynchronous fetch request using Sam Vermette's SVProgressHUD library. Download the library from GitHub and add the SVProgressHUD folder to your Xcode project.
Step 2: Setting Up NSProgress
In this article, we won't explore the NSProgress class in much detail, but feel free to read more about it in the documentation. We create an NSProgress instance in the block we hand to the performBlock: method in the view controller's viewDidLoad method.
// Create Progress
NSProgress *progress = [NSProgress progressWithTotalUnitCount:1];
// Become Current
[progress becomeCurrentWithPendingUnitCount:1];
You may be surprised that we set the total unit count to 1. The reason is simple. When Core Data executes the asynchronous fetch request, it doesn't know how many records it will find in the persistent store. This also means that we won't be able to show the relative progress to the user—a percentage. Instead, we will show the user the absolute progress—the number of records it has found.
You could remedy this issue by performing a fetch request to fetch the number of records before you execute the asynchronous fetch request. I prefer not to do this, though, because this also means that fetching the records from the persistent store takes longer to complete because of the extra fetch request at the start.
Step 3: Adding an Observer
When we execute the asynchronous fetch request, we are immediately handed an NSAsynchronousFetchResult object. This object has a progress property, which is of type NSProgress. It's this progress property that we need to observe if we want to receive progress updates.
Note that we call resignCurrent on the progress object to balance the earlier becomeCurrentWithPendingUnitCount: call. Keep in mind that both of these methods need to be invoked on the same thread.
Step 4: Removing the Observer
In the completion block of the asynchronous fetch request, we remove the observer and dismiss the progress HUD.
Before we implement observeValueForKeyPath:ofObject:change:context:, we need to add an import statement for the SVProgressHUD library, declare the static variable ProgressContext that we pass in as the context when adding and removing the observer, and show the progress HUD before creating the asynchronous fetch request.
All that's left for us to do, is implement the observeValueForKeyPath:ofObject:change:context: method. We check if context is equal to ProgressContext, create a status object by extracting the number of completed records from the change dictionary, and update the progress HUD. Note that we update the user interface on the main thread.
If we want to properly test our application, we need more data. While I don't recommend using the following approach in a production application, it's a quick and easy way to populate the database with data.
Open TSPAppDelegate.m and update the application:didFinishLaunchingWithOptions: method as shown below. The populateDatabase method is a simple helper method in which we add dummy data to the database.
The implementation is straightforward. Because we only want to insert dummy data once, we check the user defaults database for the key @"didPopulateDatabase". If the key isn't set, we insert dummy data.
- (void)populateDatabase {
// Helpers
NSUserDefaults *ud = [NSUserDefaults standardUserDefaults];
if ([ud objectForKey:@"didPopulateDatabase"]) return;
for (NSInteger i = 0; i < 1000000; i++) {
// Create Entity
NSEntityDescription *entity = [NSEntityDescription entityForName:@"TSPItem" inManagedObjectContext:self.managedObjectContext];
// Initialize Record
NSManagedObject *record = [[NSManagedObject alloc] initWithEntity:entity insertIntoManagedObjectContext:self.managedObjectContext];
// Populate Record
[record setValue:[NSString stringWithFormat:@"Item %li", (long)i] forKey:@"name"];
[record setValue:[NSDate date] forKey:@"createdAt"];
}
// Save Managed Object Context
[self saveManagedObjectContext];
// Update User Defaults
[ud setBool:YES forKey:@"didPopulateDatabase"];
}
The number of records is important. If you plan to run the application on the iOS Simulator, then it's fine to insert 100,000 or 1,000,000 records. This won't work as good on a physical device and will take too long to complete.
In the for loop, we create a managed object and populate it with data. Note that we don't save the changes of the managed object context during each iteration of the for loop.
Finally, we update the user defaults database to make sure the database isn't populated the next time the application is launched.
Great. Run the application in the iOS Simulator to see the result. You'll notice that it takes a few moments for the asynchronous fetch request to start fetching records and update the progress HUD.
6. Breaking Changes
By replacing the fetched results controller class with an asynchronous fetch request, we have broken a few pieces of the application. For example, tapping the checkmark of a to-do item doesn't seem to work any longer. While the database is being updated, the user interface doesn't reflect the change. The solution is fairly easy to fix and I'll leave it up to you to implement a solution. You should now have enough knowledge to understand the problem and find a suitable solution.
Conclusion
I'm sure you agree that asynchronous fetching is surprisingly easy to use. The heavy lifting is done by Core Data, which means that there's no need to manually merge the results of the asynchronous fetch request with the managed object context. Your only job is to update the user interface when the asynchronous fetch request hands you its results. Together with batch updates, it's a great addition to the Core Data framework.
This article also concludes this series on Core Data. You have learned a lot about the Core Data framework and you know all the essentials to use Core Data in a real application. Core Data is a powerful framework and, with the release of iOS 8, Apple has shown us that it gets better every year.
In the previous article about iOS 8 and Core Data, we discussed batch updates. Batch updates aren't the only new API in town. As of iOS 8 and OS X Yosemite, it's possible to asynchronously fetch data. In this tutorial, we'll take a closer look at how to implement asynchronous fetching and in what situations your application can benefit from this new API.
1. The Problem
Like batch updates, asynchronous fetching has been on the wish list of many developers for quite some time. Fetch requests can be complex, taking a non-trivial amount of time to complete. During that time, the fetch request blocks the thread it's running on and, as a result, blocks access to the managed object context executing the fetch request. The problem is simple to understand, but what does Apple's solution look like?
2. The Solution
Apple's answer to this problem is asynchronous fetching. An asynchronous fetch request runs in the background. This means that it doesn't block other tasks while it's being executed, such as updating the user interface on the main thread.
Asynchronous fetching also sports two other convenient features: progress reporting and cancellation. An asynchronous fetch request can be cancelled at any time, for example, when the user decides the fetch request takes too long to complete. Progress reporting is a useful addition to show the user the current state of the fetch request.
Asynchronous fetching is a flexible API. Not only is it possible to cancel an asynchronous fetch request, it's also possible to make changes to the managed object context while the asynchronous fetch request is being executed. In other words, the user can continue to use your application while the application executes an asynchronous fetch request in the background.
3. How Does It Work?
Like batch updates, asynchronous fetch requests are handed to the managed object context as an NSPersistentStoreRequest object, an instance of the NSAsynchronousFetchRequest class to be precise.
An NSAsynchronousFetchRequest instance is initialized with an NSFetchRequest object and a completion block. The completion block is executed when the asynchronous fetch request has completed its fetch request.
Let's revisit the to-do application we created earlier in this series and replace the current implementation of the NSFetchedResultsController class with an asynchronous fetch request.
Step 1: Project Setup
Download or clone the project from GitHub and open it in Xcode 6. Before we can start working with the NSAsynchronousFetchRequest class, we need to make some changes. We won't be able to use the NSFetchedResultsController class for managing the table view's data, since that class was designed to run on the main thread.
Step 2: Replacing the Fetched Results Controller
Start by updating the private class extension of the TSPViewController class as shown below. We remove the fetchedResultsController property and create a new property, items, of type NSArray for storing the to-do items. This also means that the TSPViewController class no longer needs to conform to the NSFetchedResultsControllerDelegate protocol.
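A minimal version of the updated class extension could look like this sketch. The class and property names follow the tutorial, but treat the exact declarations as an approximation:

```objc
#import "TSPViewController.h"

@interface TSPViewController ()

// Replaces the fetchedResultsController property; stores the fetched to-do items.
@property (strong, nonatomic) NSArray *items;

@end
```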
Before we refactor the viewDidLoad method, I first want to update the implementation of the UITableViewDataSource protocol. Take a look at the changes I've made in the following code blocks.
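The refactored data source methods might look like the following sketch, backed by the items array instead of a fetched results controller. The reuse identifier and cell configuration are placeholders, not the project's actual values:

```objc
- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
    return 1;
}

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    return self.items.count;
}

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    // Dequeue Reusable Cell (the reuse identifier here is a placeholder)
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"CellIdentifier" forIndexPath:indexPath];

    // Fetch Record
    NSManagedObject *record = [self.items objectAtIndex:indexPath.row];

    // Configure Cell
    cell.textLabel.text = [record valueForKey:@"name"];

    return cell;
}
```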
We also need to change one line of code in the prepareForSegue:sender: method as shown below.
// Fetch Record
NSManagedObject *record = [self.items objectAtIndex:self.selection.row];
Last but not least, delete the implementation of the NSFetchedResultsControllerDelegate protocol since we no longer need it.
Step 3: Creating the Asynchronous Fetch Request
As you can see below, we create the asynchronous fetch request in the view controller's viewDidLoad method. Let's take a moment to see what's going on.
We start by creating and configuring an NSFetchRequest instance to initialize the asynchronous fetch request. It's this fetch request that the asynchronous fetch request will execute in the background.
The completion block is invoked when the asynchronous fetch request has completed executing its fetch request. The completion block takes one argument of type NSAsynchronousFetchResult, which contains the result of the query as well as a reference to the original asynchronous fetch request.
In the completion block, we invoke processAsynchronousFetchResult:, passing in the NSAsynchronousFetchResult object. We'll take a look at this helper method in a few moments.
Executing the asynchronous fetch request is almost identical to how we execute an NSBatchUpdateRequest. We call executeRequest:error: on the managed object context, passing in the asynchronous fetch request and a pointer to an NSError object.
Note that we execute the asynchronous fetch request by calling performBlock: on the managed object context. While this isn't strictly necessary since the viewDidLoad method, in which we create and execute the asynchronous fetch request, is called on the main thread, it's a good habit and best practice to do so.
Even though the asynchronous fetch request is executed in the background, note that the executeRequest:error: method returns immediately, handing us an NSAsynchronousFetchResult object. Once the asynchronous fetch request completes, that same NSAsynchronousFetchResult object is populated with the result of the fetch request.
Finally, we check if the asynchronous fetch request was executed without issues by checking if the NSError object is equal to nil.
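Putting these pieces together, the viewDidLoad implementation might look like the following sketch. The TSPItem entity name and the createdAt sort key come from the project, but the surrounding details are an approximation:

```objc
- (void)viewDidLoad {
    [super viewDidLoad];

    // Initialize Fetch Request
    NSFetchRequest *fetchRequest = [NSFetchRequest fetchRequestWithEntityName:@"TSPItem"];
    [fetchRequest setSortDescriptors:@[[NSSortDescriptor sortDescriptorWithKey:@"createdAt" ascending:YES]]];

    __weak typeof(self) weakSelf = self;

    // Initialize Asynchronous Fetch Request
    NSAsynchronousFetchRequest *asynchronousFetchRequest = [[NSAsynchronousFetchRequest alloc] initWithFetchRequest:fetchRequest completionBlock:^(NSAsynchronousFetchResult *result) {
        // Process Asynchronous Fetch Result
        [weakSelf processAsynchronousFetchResult:result];
    }];

    // Execute Asynchronous Fetch Request
    [self.managedObjectContext performBlock:^{
        NSError *error = nil;
        [self.managedObjectContext executeRequest:asynchronousFetchRequest error:&error];

        if (error) {
            NSLog(@"Unable to execute asynchronous fetch request: %@", error.localizedDescription);
        }
    }];
}
```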
Step 4: Processing the Asynchronous Fetch Result
The processAsynchronousFetchResult: method is nothing more than a helper method in which we process the result of the asynchronous fetch request. We set the view controller's items property with the contents of the result's finalResult property and reload the table view.
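The helper method could be as simple as this sketch; the tableView outlet name is an assumption:

```objc
- (void)processAsynchronousFetchResult:(NSAsynchronousFetchResult *)result {
    if (result.finalResult) {
        // Update Items
        self.items = result.finalResult;

        // Reload Table View
        [self.tableView reloadData];
    }
}
```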
Build the project and run the application in the iOS Simulator. You may be surprised to see your application crash when it tries to execute the asynchronous fetch request. Fortunately, the output in the console tells us what went wrong.
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'NSConfinementConcurrencyType context <NSManagedObjectContext: 0x7fce3a731e60> cannot support asynchronous fetch request <NSAsynchronousFetchRequest: 0x7fce3a414300> with fetch request <NSFetchRequest: 0x7fce3a460860> (entity: TSPItem; predicate: ((null)); sortDescriptors: ((
"(createdAt, ascending, compare:)"
)); type: NSManagedObjectResultType; ).'
If you haven't read the article about Core Data and concurrency, you may be confused by what you're reading. Remember that Core Data declares three concurrency types, NSConfinementConcurrencyType, NSPrivateQueueConcurrencyType, and NSMainQueueConcurrencyType. Whenever you create a managed object context by invoking the class's init method, the resulting managed object context's concurrency type is equal to NSConfinementConcurrencyType. This is the default concurrency type.
The problem, however, is that asynchronous fetching is incompatible with the NSConfinementConcurrencyType type. Without going into too much detail, it's important to know that the asynchronous fetch request needs to merge the results of its fetch request with the managed object context that executed the asynchronous fetch request. It needs to know on which dispatch queue it can do this and that is why only NSPrivateQueueConcurrencyType and NSMainQueueConcurrencyType support asynchronous fetching. The solution is very simple though.
Step 5: Configuring the Managed Object Context
Open TSPAppDelegate.m and update the managedObjectContext method as shown below.
The only change we've made is replacing the init method with initWithConcurrencyType:, passing in NSMainQueueConcurrencyType as the argument. This means that the managed object context should only be accessed from the main thread. This works fine as long as we use the performBlock: or performBlockAndWait: methods to access the managed object context.
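The updated accessor might look like this sketch, based on the standard Xcode Core Data template; only the initializer call differs from the boilerplate:

```objc
- (NSManagedObjectContext *)managedObjectContext {
    if (_managedObjectContext != nil) {
        return _managedObjectContext;
    }

    NSPersistentStoreCoordinator *coordinator = [self persistentStoreCoordinator];
    if (!coordinator) {
        return nil;
    }

    // Use the main queue concurrency type so the context supports asynchronous fetching.
    _managedObjectContext = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSMainQueueConcurrencyType];
    [_managedObjectContext setPersistentStoreCoordinator:coordinator];

    return _managedObjectContext;
}
```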
Run the project one more time to make sure that our change has indeed fixed the problem.
4. Showing Progress
The NSAsynchronousFetchRequest class adds support for monitoring the progress of the fetch request and it's even possible to cancel an asynchronous fetch request, for example, if the user decides that it's taking too long to complete.
The NSAsynchronousFetchRequest class leverages the NSProgress class for progress reporting as well as canceling an asynchronous fetch request. The NSProgress class, available since iOS 7 and OS X 10.9, is a clever way to monitor the progress of a task without the need to tightly couple the task to the user interface.
The NSProgress class also supports cancellation, which is how an asynchronous fetch request can be canceled. Let's find out what we need to do to implement progress reporting for the asynchronous fetch request.
Step 1: Adding SVProgressHUD
We'll show the user the progress of the asynchronous fetch request using Sam Vermette's SVProgressHUD library. Download the library from GitHub and add the SVProgressHUD folder to your Xcode project.
Step 2: Setting Up NSProgress
In this article, we won't explore the NSProgress class in much detail, but feel free to read more about it in the documentation. We create an NSProgress instance in the block we hand to the performBlock: method in the view controller's viewDidLoad method.
// Create Progress
NSProgress *progress = [NSProgress progressWithTotalUnitCount:1];
// Become Current
[progress becomeCurrentWithPendingUnitCount:1];
You may be surprised that we set the total unit count to 1. The reason is simple. When Core Data executes the asynchronous fetch request, it doesn't know how many records it will find in the persistent store. This also means that we won't be able to show the relative progress to the user—a percentage. Instead, we will show the user the absolute progress—the number of records it has found.
You could remedy this issue by performing a fetch request to fetch the number of records before you execute the asynchronous fetch request. I prefer not to do this, though, because this also means that fetching the records from the persistent store takes longer to complete because of the extra fetch request at the start.
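If you did want relative progress, a count fetch along these lines could provide the total unit count up front. This is an optional variation, not part of the tutorial's implementation:

```objc
// Optional: fetch the number of records first so progress can be shown as a percentage.
NSFetchRequest *countRequest = [NSFetchRequest fetchRequestWithEntityName:@"TSPItem"];

NSError *countError = nil;
NSUInteger numberOfRecords = [self.managedObjectContext countForFetchRequest:countRequest error:&countError];

if (numberOfRecords != NSNotFound) {
    // With a real total, completedUnitCount / totalUnitCount yields a meaningful fraction.
    NSProgress *progress = [NSProgress progressWithTotalUnitCount:(int64_t)numberOfRecords];
    [progress becomeCurrentWithPendingUnitCount:(int64_t)numberOfRecords];
}
```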
Step 3: Adding an Observer
When we execute the asynchronous fetch request, we are immediately handed an NSAsynchronousFetchResult object. This object has a progress property, which is of type NSProgress. It's this progress property that we need to observe if we want to receive progress updates.
Note that we call resignCurrent on the progress object to balance the earlier becomeCurrentWithPendingUnitCount: call. Keep in mind that both of these methods need to be invoked on the same thread.
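The block handed to performBlock: could be extended along these lines; this sketch assumes the progress and asynchronousFetchRequest objects created earlier in viewDidLoad, and ProgressContext is the static variable used as the KVO context:

```objc
[self.managedObjectContext performBlock:^{
    NSError *error = nil;

    // Execute Asynchronous Fetch Request
    NSAsynchronousFetchResult *asynchronousFetchResult = (NSAsynchronousFetchResult *)[self.managedObjectContext executeRequest:asynchronousFetchRequest error:&error];

    // Add Observer for Progress Updates
    [asynchronousFetchResult.progress addObserver:self forKeyPath:@"completedUnitCount" options:NSKeyValueObservingOptionNew context:ProgressContext];

    // Resign Current
    [progress resignCurrent];

    if (error) {
        NSLog(@"Unable to execute asynchronous fetch request: %@", error.localizedDescription);
    }
}];
```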
Step 4: Removing the Observer
In the completion block of the asynchronous fetch request, we remove the observer and dismiss the progress HUD.
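The completion block could be updated as in this sketch, removing the observer and dismissing the HUD on the main thread; the exact ordering is an assumption:

```objc
NSAsynchronousFetchRequest *asynchronousFetchRequest = [[NSAsynchronousFetchRequest alloc] initWithFetchRequest:fetchRequest completionBlock:^(NSAsynchronousFetchResult *result) {
    // Remove Observer
    [result.progress removeObserver:self forKeyPath:@"completedUnitCount" context:ProgressContext];

    // Update User Interface on the Main Thread
    dispatch_async(dispatch_get_main_queue(), ^{
        // Dismiss Progress HUD
        [SVProgressHUD dismiss];

        // Process Asynchronous Fetch Result
        [self processAsynchronousFetchResult:result];
    });
}];
```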
Before we implement observeValueForKeyPath:ofObject:change:context:, we need to add an import statement for the SVProgressHUD library, declare the static variable ProgressContext that we pass in as the context when adding and removing the observer, and show the progress HUD before creating the asynchronous fetch request.
All that's left for us to do is implement the observeValueForKeyPath:ofObject:change:context: method. We check if context is equal to ProgressContext, create a status string by extracting the number of completed records from the change dictionary, and update the progress HUD. Note that we update the user interface on the main thread.
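A possible implementation looks like this sketch; the status string's wording is an assumption:

```objc
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
    if (context == ProgressContext) {
        // Extract Number of Completed Records
        NSNumber *completed = change[NSKeyValueChangeNewKey];

        // Update Progress HUD on the Main Thread
        dispatch_async(dispatch_get_main_queue(), ^{
            NSString *status = [NSString stringWithFormat:@"Fetched %@ Records", completed];
            [SVProgressHUD showWithStatus:status];
        });

    } else {
        [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
    }
}
```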
5. Populating the Database
If we want to properly test our application, we need more data. While I don't recommend using the following approach in a production application, it's a quick and easy way to populate the database with data.
Open TSPAppDelegate.m and update the application:didFinishLaunchingWithOptions: method as shown below. The populateDatabase method is a simple helper method in which we add dummy data to the database.
The implementation is straightforward. Because we only want to insert dummy data once, we check the user defaults database for the key @"didPopulateDatabase". If the key isn't set, we insert dummy data.
- (void)populateDatabase {
    // Helpers
    NSUserDefaults *ud = [NSUserDefaults standardUserDefaults];

    if ([ud objectForKey:@"didPopulateDatabase"]) return;

    for (NSInteger i = 0; i < 1000000; i++) {
        // Create Entity
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"TSPItem" inManagedObjectContext:self.managedObjectContext];

        // Initialize Record
        NSManagedObject *record = [[NSManagedObject alloc] initWithEntity:entity insertIntoManagedObjectContext:self.managedObjectContext];

        // Populate Record
        [record setValue:[NSString stringWithFormat:@"Item %li", (long)i] forKey:@"name"];
        [record setValue:[NSDate date] forKey:@"createdAt"];
    }

    // Save Managed Object Context
    [self saveManagedObjectContext];

    // Update User Defaults
    [ud setBool:YES forKey:@"didPopulateDatabase"];
}
The number of records is important. If you plan to run the application on the iOS Simulator, then it's fine to insert 100,000 or 1,000,000 records. This won't work as well on a physical device; it will take too long to complete.
In the for loop, we create a managed object and populate it with data. Note that we don't save the changes of the managed object context during each iteration of the for loop.
Finally, we update the user defaults database to make sure the database isn't populated the next time the application is launched.
Great. Run the application in the iOS Simulator to see the result. You'll notice that it takes a few moments for the asynchronous fetch request to start fetching records and update the progress HUD.
6. Breaking Changes
By replacing the fetched results controller class with an asynchronous fetch request, we have broken a few pieces of the application. For example, tapping the checkmark of a to-do item doesn't seem to work any longer. While the database is being updated, the user interface doesn't reflect the change. The problem is fairly easy to fix and I'll leave it up to you to implement a solution. You should now have enough knowledge to understand the problem and find a suitable fix.
Conclusion
I'm sure you agree that asynchronous fetching is surprisingly easy to use. The heavy lifting is done by Core Data, which means that there's no need to manually merge the results of the asynchronous fetch request with the managed object context. Your only job is to update the user interface when the asynchronous fetch request hands you its results. Together with batch updates, it's a great addition to the Core Data framework.
This article also concludes this series on Core Data. You have learned a lot about the Core Data framework and you know all the essentials to use Core Data in a real application. Core Data is a powerful framework and, with the release of iOS 8, Apple has shown us that it gets better every year.
Even though you've learned the basic concepts of the Windows Phone platform, creating a modern Windows Phone application may still feel daunting. The truth is that we've only scratched the surface; there is more to Windows Phone development than what we've covered in this series.
However, a good foundation is important and you are on the right track to create great Windows Phone applications. To help you continue your journey into the world of Windows Phone development, I have put together a list of recommendations.
1. C# & XAML
C#
Since the recommended programming language for creating Windows Phone applications is C#, it is only natural that you need to become more familiar with the language. While a basic understanding of C# was one of the requirements for this series, if you want to write more advanced applications, then you'll also need to learn more about C#. Concepts such as delegates, BackgroundWorker, and WebClient are definitely worth exploring.
XAML
We've worked a lot with XAML in this series and it's an essential aspect of an application's user interface. It's fairly easy to get up to speed with XAML, but I recommend that you also learn some of its more advanced concepts if you want to be able to create more advanced Windows Phone layouts.
The below links are a good starting point if you plan to learn more about C# and XAML. Check them out to see for yourself.
2. MVVM
MVVM, short for Model View ViewModel, is a design pattern that describes the process of decoupling a Windows Phone application into three separate, independent components: the View, the Model, and the ViewModel. It is a fairly advanced design pattern, but learning and applying it will save you a lot of time and make your application much more testable and reusable.
The MVVM design pattern, alongside the concept of data binding, makes Windows Phone development wonderful, and I therefore recommend that you make yourself familiar with both concepts.
The below links are a good starting point if you plan to learn more about MVVM and Data Binding.
3. CodePlex
A lot of open source projects for Windows Phone are hosted at CodePlex. It's a great place to find open source libraries, such as parsers, toolkits, and other useful projects made available to the Windows Phone community. It's like a Bible for any Windows Phone developer. If you ever find yourself looking for a library that implements a trivial or common feature, then you may find one or more solutions on CodePlex.
4. Practice & Build
While the tips in this tutorial are great for learning more about Windows Phone development, it's important to put what you've learned into practice by creating applications. I encourage you to work on challenging projects that are out of your comfort zone. It may be frustrating at first, but it's a great way to learn and improve your skills.
There are many resources available about Windows Phone that will help you overcome the hurdles you run into. A simple Google search will almost always do the trick.
Also, Microsoft runs various programs that aim to convince more developers to make Windows Phone applications. One program that stands out is DVLUP. On the DVLUP website, you can find ideas for mobile applications and earn rewards for completing any of the program's challenges.
Conclusion
The aim of this series was to teach you the basic concepts of Windows Phone development and to prepare you for more advanced Windows Phone application development. The techniques you learned in this series are fundamental concepts that you need to know to move forward and create more advanced applications.
By completing this series, you have become familiar with the Windows Phone platform and have created a solid foundation, which you can continue to build upon. It's time that you put your knowledge into practice and build something. It doesn't need to be great or perfect; build something that you can improve over time as your knowledge and skills grow.
Sani Yusuf, October 8, 2014
Even though you've learned the basic concepts of the Windows Phone platform, creating a modern Windows Phone application may still feel daunting. The truth is that we've only scratched the surface in this series, there is more to Windows Phone development than what we've covered in this series.
However, a good foundation is important and you are on the right track to create great Windows Phone applications. To help you continue your journey into the world of Windows Phone development, I have created a list of things that I recommend you do to continue your journey.
1. C# & XAML
C#
Since the recommended programming
language for creating Windows Phone applications is C#, it is only natural that
you need to become more familiar with the language. While a basic understanding of C#
was one of the requirements for this series, if you want to write more advanced applications, then you'll also need to learn more about C#. Concepts, such as delegates, BackgroundWorker, and WebClient, are definitely worth exploring.
XAML
We've worked a lot with XAML in this series and it's an essential aspect of an application's user interface. It's fairly easy to get up to speed with XAML, but I recommend that you also learn some of its more advanced concepts if you want to be able to create more advanced Windows Phone layouts.
The below links are a good starting point if you plan to learn more about C# and XAML. Check them out to see for yourself.
MVVM, short for Model View
ViewModel, is a design pattern that describes the process of decoupling a
Windows Phone application into three separate independent components, the View, the Model, and the ViewModel. It is a fairly advanced design pattern, but learning and
applying it will save you a lot of time and make your application much more
testable and reusable.
The MVVM design pattern alongside the Data Binding concept makes Windows Phone development wonderful and I therefore recommend that
you make yourself familiar with both concepts. get used to these techniques in the near future.
The below links are a good starting point if you plan to learn more about MVVM and Data Binding.
At CodePlex, a lot of open source projects for Windows Phone are
hosted. It's a great place to find open source libraries, such as parsers, toolkits, and other useful projects made available to the Windows Phone community. It’s
like the Bible for any Windows Phone developer. If you ever find yourself looking for a library that implements a trivial or common feature, then you may find one or more solutions on CodePlex.
4. Practice & Build
While the tips in this tutorial are great for learning more about Windows Phone development, it's important to put what you've learned into practice by creating applications. I encourage you to work on challenging projects that are out of your comfort zone. It may be frustrating at first, but it's a great way to
learn and improve your skills.
There are many resources available about Windows Phone that will help you overcome the hurdles you run into. A simple Google search will almost always do the trick.
Also, Microsoft runs various programs that aim to convince more developers to make Windows Phone applications. One program that stands out is DVLUP. On the DVLUP website, you can ideas for mobile applications and you also get rewards for completing any of the program's challenges.
Conclusion
The aim of this series was to teach you the basic concepts of Windows Phone development and to prepare you for more advanced Windows Phone application development. The
techniques you learned in this series are basic concepts that you must know to move forward and create more advanced applications.
By completing this series, you have become familiar with the Windows Phone platform and have created a solid foundation, which you can continue to build upon. It's time that you put your knowledge into practice and build something. It doesn't need to be great or perfect, build something that you improve over time as your knowledge grows and skills improve.
Sani Yusuf, 2014-10-08
In the first part of this two-part series, we explored what Android Wear is, how it works, and delved into the new user interface the Android team developed specifically for Android Wear. I also shared some best practices to bear in mind when you're developing for the world of Android wearables.
In the second part of this series, you'll put your new Android Wear knowledge into practice by creating two sample apps that integrate with Android Wear in different ways.
The first app demonstrates the easiest way to start developing for Android Wear: taking a regular handheld app and extending its notifications so they appear and function perfectly on a paired Android Wear device.
In the second sample, you'll create a full-screen wearable app by creating a Hello World project that consists of a handheld and a wearable component. After you've created this barebones project, you'll have everything in place to continue working and develop it into a full-blown wearable app.
This tutorial uses Android Studio. If you don't already have it installed, you can download the IDE from the official Android Developers website.
1. Download, Install & Update Your Software
Before you can develop anything for the wearable platform, you need to prepare your development environment by installing and updating all the packages you'll need, ensuring your Android Studio IDE is up to date.
To check you're running the latest version of Android Studio, launch the IDE, click Android Studio in the toolbar, and select Check for Updates. This tutorial requires Android Studio version 0.8 or higher, but ideally you should have the latest version installed, so you can benefit from the very latest features and fixes.
Next, open the Android SDK Manager and check you have the latest versions of the following three packages:
SDK Tools
Platform tools
Build tools
Download and install any available updates. Once these packages are up to date, two new packages will appear in the SDK Manager:
Android L Developer Preview
Android 4.4W (API 20)
Download and install both packages.
If you've just updated your SDK Tools, Platform Tools, and/or Build Tools packages, but still don't see the Android L and Android 4.4W packages, then close and relaunch the SDK Manager. This should force the two new packages out of hiding.
If you haven't already installed the Android Support Library, do so now. You'll find it in the SDK Manager's Extras category.
2. Create a Wearable AVD
Regardless of whether you're building a wearable app or a handheld app that generates wearable-ready notifications, you'll need a way to test the wearable parts of your project. This is fairly straightforward thanks to the familiar AVD Manager, which has everything you need to emulate a wearable device.
Even if you own a physical Android Wear smartwatch, you'll need a way to test your project across the different Android Wear screens, so you still need to create at least one AVD. At the moment, this just means testing your project on a round and a square screen, but this list is likely to grow as more Android Wear devices are released.
To create an Android Wear AVD, launch the AVD Manager and click Create. Give your AVD a name, and enter the following settings:
Device: Select either Android Wear Round or Android Wear Square, depending on the screen you want to emulate.
Target: Choose Android L Preview.
Skin: Select either AndroidWearRound or AndroidWearSquare, to match the device you chose.
Once you've created your AVD, launch it, and leave it running in the background.
Although you're now emulating an Android Wear device, what you aren’t emulating is the connection that exists between a physical wearable and a paired smartphone or tablet.
If you're going to accurately test your project's wearable components, you need to emulate this connection. This is where the Android Wear companion app, available from Google Play, comes in.
3. Connect Your Handheld to the Emulator
Once you've installed the Android Wear companion app on your smartphone or tablet, this handheld device gains the ability to communicate with a wearable AVD in the same way a paired handheld device communicates with a physical wearable.
Step 1
On your handheld device, open the Google Play store and install the official Android Wear app.
Step 2
Enable USB debugging on your smartphone or tablet, and use the USB cable to connect your handheld device to your computer.
Step 3
Before your Android Wear AVD can communicate with your handheld, you need to open TCP port 5601 on your computer. Launch Terminal on OS X or the Command Prompt on Windows, and change the directory so it's pointing to your platform-tools folder:
cd Users/jessica/Downloads/adt-bundle/sdk/platform-tools
Note that the above command will vary depending on where the Android SDK is located on your development machine.
Step 4
Now that the Terminal or Command Prompt is pointing at the correct location, open the necessary port by issuing the adb command shown below. On Windows, omit the ./ prefix.
./adb -d forward tcp:5601 tcp:5601
Step 5
On your handheld device, launch the Android Wear companion app. Tap the watch icon in the app's toolbar (highlighted in the below screenshot), and wait for Connected to appear in the toolbar.
Whenever you want to connect your handheld to a wearable AVD, you'll need to repeat this process. Save yourself some time by leaving the emulator running in the background and your smartphone or tablet plugged into your computer while you work your way through this tutorial.
Before you move onto the next step, it's worth taking some time to explore how a handheld and a wearable interact, particularly if this is your first hands-on experience with Android Wear.
When you connect a wearable AVD to a handheld device, the AVD automatically starts pulling notifications from the connected smartphone or tablet, and displays them as cards in its emulated Context Stream. A good way to familiarize yourself with Android Wear is to spend some time swiping through these personalized notification cards.
To perform a swiping action, use your mouse to drag the notification cards up and down. You can also view a notification card's action buttons, plus any additional pages, by swiping/dragging the card to the left.
You can explore additional notification cards, by sending a selection of demo cards to your AVD. To send a demo card, open the companion app and tap the three-dotted menu icon in its upper-right corner. Select Demo cards and choose a card from the list. The demo card will then appear in your AVD's Context Stream. Once a demo card arrives on the AVD, it functions exactly the same as a regular notification card.
4. Sample App 1: Wearable-Ready Notifications
Android Wear takes a proactive approach to pulling notifications from paired Android smartphones or tablets, and displaying them as cards in the Context Stream. However, if your app doesn't explicitly support Android Wear, there's no guarantee its notifications will display and function correctly on an Android Wear device.
To provide the best possible experience for any Android Wear users who may come into contact with your app, you need to create handheld notifications that can seamlessly extend to a paired wearable device, if the need arises. Over the next few sections, I'll show you how to create a sample handheld app that can trigger a wearable-ready notification.
Step 1: Project Setup
This sample app will live on the user's smartphone or tablet, so start by creating a basic Android project. Open Android Studio’s File menu and select New Project. Give your project a name and click Next. Select Phone and tablet, choose the minimum SDK your app will support, and click Next. Select Blank Activity and click Next. Give your activity a name. For the purposes of this tutorial, I’m using MyActivity. Click Finish to let Android Studio create the project.
Step 2: Update Gradle Build File
In order to create a wearable-ready notification, your project needs access to the Support Library. Open your project's build.gradle file and add the Support Library to the dependencies section as shown below.
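Something along these lines should work; the exact version of the Support Library is an assumption based on the Android 4.4W (API 20) era and may differ on your machine:

```groovy
dependencies {
    // support-v4 provides NotificationCompat and the wearable notification
    // extensions; the 20.0.+ version string is an assumption
    compile "com.android.support:support-v4:20.0.+"
}
```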
At this point, Android Studio should prompt you to synchronize the changes you've made to the build.gradle file, so click the Sync now message when it appears. If the IDE doesn't prompt you, you should still synchronize your changes by opening Android Studio's File menu and clicking Synchronize.
Step 3: Create User Interface
This sample app will consist of a button that triggers the notification. To create this simple user interface, open the app > src > main > res > layout > activity_my.xml file, and enter the following:
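A minimal layout that does the job, assuming the RelativeLayout root the project wizard generates; the sendNotification click handler name is a hypothetical choice you're free to change:

```xml
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- The button that triggers the wearable-ready notification.
         The onClick handler name (sendNotification) is an assumption. -->
    <Button
        android:id="@+id/notificationButton"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerInParent="true"
        android:text="@string/notify"
        android:onClick="sendNotification" />

</RelativeLayout>
```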
This user interface references a string resource, so open the Values > strings.xml file and add the following to it:
<string name="notify">Notify Wearable</string>
Step 4: Create a Notification
You're now ready to create your wearable-ready notification. Open app > src > main > java > MyActivity and import the classes you'll use to create your app. The first few should already be familiar.
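A sketch of what the activity might look like, using the Support Library's real NotificationCompat and NotificationManagerCompat classes; the sendNotification handler name and the notification text are illustrative assumptions:

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.support.v4.app.NotificationCompat;
import android.support.v4.app.NotificationManagerCompat;

public class MyActivity extends Activity {

    private static final int NOTIFICATION_ID = 1;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_my);
    }

    // Wired to the button's android:onClick attribute in activity_my.xml
    public void sendNotification(View view) {
        // NotificationCompat builds notifications that the system can
        // automatically extend to a paired Android Wear device
        NotificationCompat.Builder builder = new NotificationCompat.Builder(this)
                .setSmallIcon(R.drawable.ic_launcher)
                .setContentTitle("Hello Wearable")
                .setContentText("This notification also appears on your watch.");

        // NotificationManagerCompat delivers the notification on the handheld
        // and lets the system mirror it to the Context Stream
        NotificationManagerCompat.from(this).notify(NOTIFICATION_ID, builder.build());
    }
}
```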
The next step is to test your project across the handheld and wearable platforms. If you haven't already, launch your wearable AVD and connect it to your handheld device before continuing.
Open the Run menu and select Run 'app'. In the Choose Device window, select your handheld device. After a few seconds, your app will appear on the connected smartphone or tablet.
To test that all-important notification, tap the app's Notify Wearable button. A notification will then appear in the smartphone or tablet's notification drawer. Open the notification drawer to check this part of the notification is displaying correctly.
On your Android Wear AVD, scroll through the notification cards until you find the card generated from your wearable-ready notification. Again, check this notification is displaying correctly. And you're done.
5. Sample App 2: Full-Screen Wearable App
Wearable Component
Although Google is encouraging developers to integrate their apps with Android Wear's Context Stream, it is possible to create full-screen apps for Android Wear devices.
Wearable full-screen apps actually consist of a handheld and a wearable component. The user installs the handheld app on their smartphone or tablet, and the system automatically pushes the wearable component to the paired Android Wear device. If you want to create a full-screen wearable app, you need to create a handheld app that contains a wearable component.
This may sound complicated, but you can create this kind of two-in-one project just by selecting the right options in Android Studio's project wizard:
In Android Studio, open the File menu and select New Project.
Give your project a name, and click Next.
Select Phone and tablet and Wear. You can choose which minimum SDK the Phone and tablet module supports, but the Wear module must support API 20. Click Next.
Select Blank Activity and click Next.
Give your activity a distinctive name so there's no chance of you confusing it with your project's wearable activity, for example, HandheldActivity. Click Next.
Select Blank Wear Activity and click Next.
Give the wearable activity a name that makes it impossible to confuse with the project's handheld activity.
Create your project by clicking Finish.
Exploring Hello World Projects
When you tell the project creation wizard to create Phone and tablet and Wear components, it creates two modules:
Mobile: Despite the name, this module runs on tablets as well as smartphones.
Wear: The Android system pushes this module to the paired wearable device.
If you open either module, you'll see Android Studio has already populated the module with a host of classes, directories, and resources.
Test Hello World Code
Android Studio not only automatically generates the layout for both modules, it also kits them out with some Hello World code. Although you'll replace this Hello World code, the process of testing a project that consists of handheld as well as wearable content remains the same. This is a good opportunity to learn how to test this kind of project.
Before you start, make sure your wearable AVD is up and running and that it's connected to your handheld device.
To test the project's handheld module:
Open the Run menu in the Android Studio toolbar and select Run….
In the popup that appears, select mobile.
When prompted, choose the handheld device that's currently connected to your computer. Your app's handheld component will then appear on your smartphone or tablet, ready for you to test.
To test the project's wearable component:
Open the Run menu in the Android Studio toolbar and select Run….
Select Wear from the popup that appears.
Select your wearable AVD.
Your app's wearable component will appear on your AVD.
Note that if your project doesn't appear automatically, you may need to swipe the screen several times to find it.
6. Troubleshooting
While it's normal to encounter the occasional bug or known issue when you're working on a software project, chances are you're going to run into a lot more problems when you're developing for Android Wear, simply because you're using an IDE that's still in beta to develop for an entirely new version of the Android operating system.
In this section, I share a workaround for a known issue, alongside some general tips and tricks to help you overcome any other problems you may run into.
At the time of writing, when you create a project with a wearable module or add wearable-ready code to a handheld project, you may encounter a known issue with the Gradle build file. This issue causes the Gradle build to fail with the following errors:
Could not find any version that matches com.google.android.support:wearable:+.
Could not find any version that matches com.google.android.gms:play-services-wearable:+.
The workaround involves adding a URL to the IDE's list of user defined sites. Launch the SDK Manager, then select Tools from the toolbar and click Manage Add-On Sites.
At this point you may encounter another issue, where the SDK Manager opens but its toolbar doesn't. If you have the SDK Manager selected, but its toolbar doesn't appear at the top of your screen, you need to minimize the SDK Manager and then select it once more. The toolbar should then appear and you can select Tools > Manage Add-On Sites.
If the error persists, check you have the latest version of the Google Play Services and Google Repository packages installed. If you've completed all these steps and are still seeing the Gradle errors, it's possible your IDE hasn't registered the changes you've made to the development environment. Closing and relaunching Android Studio should fix this.
If you encounter a different Gradle error message, or you run into a completely different problem, here are some general fixes that can help get your project back on track:
Update Packages
If some of your Android SDK packages are out of date, it's possible you're encountering an issue that's already been addressed by an updated package. Boot up your SDK Manager and check for updates.
Relaunch
If you've made some changes to your Android SDK packages and are still encountering the same problem, try closing and relaunching your IDE so you know Android Studio has registered your changes.
Android Studio Version
Since Android Studio is in beta, it's particularly important you keep it up to date as most updates bring new fixes. To make sure you're running the most recent version of Android Studio, select Android Studio > Check for Updates….
Conclusion
You now have everything you need to start adding Android Wear support to your own handheld projects. If you've been following along with this tutorial and decide to create wearable-ready notifications, then your handheld Android device and AVD Manager are already prepped to test your wearable-ready code.
If you're eager to develop full-screen Android Wear apps instead, you already have the basic structure in place, so why not continue working on the Hello World sample app?
This tutorial will show you how to get started with Metal, a framework introduced in iOS 8 that supports GPU accelerated 3D graphics rendering and data parallel computation workloads. In this tutorial, we'll take a look at the theoretical concepts that underlie Metal. You'll also learn how to create a Metal application that sets the required hardware state for graphics, commits commands for execution in the GPU, and manages buffers, textures, and precompiled shaders.
1. First Things First
This tutorial assumes that you're familiar with the Objective-C language and have some experience with OpenGL, OpenCL, or a comparable graphics API.
It also requires a physical device with an Apple A7 or A8 processor. This means that you'll need an iPhone 5S, 6, or 6 Plus, or an iPad Air or mini (2nd generation). The iOS Simulator will give you compilation errors.
This tutorial focuses on the Metal framework and won't cover the Metal Shading Language in depth. We will create a shader, but we'll only cover the basic operations needed to interact with it.
If you're using Xcode for the first time, then make sure that you add your Apple ID in the Accounts section of Xcode's Preferences. This will ensure that you don't run into problems when deploying an application onto your device.
Xcode 6 includes a project template for Metal, but to help you better understand Metal, we are going to create a project from scratch.
On a final note, we'll use Objective-C in this tutorial and it's important that you have a basic understanding of this programming language.
2. Introduction
For those of you who are familiar with OpenGL or OpenGL ES, Metal is also a low-level 3D graphics framework, but with even lower overhead. In contrast to Apple's Sprite Kit and Scene Kit frameworks, which by default don't let you interact with the rendering pipeline, Metal gives you complete control to create, configure, and modify that pipeline.
Metal has the following features:
The framework provides extremely low-overhead access to the A7 and A8 GPU, enabling incredibly high performance for sophisticated graphics rendering and computational tasks.
Metal eliminates many performance bottlenecks, such as costly state validation that is found in traditional graphics APIs.
It is explicitly designed to move all expensive state translation and compilation operations out of the runtime and rendering environment.
It provides precompiled shaders, state objects, and explicit command scheduling to ensure your application achieves the highest possible performance and efficiency for graphics rendering and computational tasks.
The framework was designed to exploit modern architectural considerations, such as multiprocessing and shared memory.
It is deeply integrated with iOS 8, the A7 and A8 chipsets, and the Apple hardware, creating a unified and independent framework.
Enough with the theory, it's time to understand how a Metal application is built.
3. Creating a Metal Application
A Metal application is characterized by a set of required steps to correctly present data on screen. These steps are usually created in order and some references are passed from one to another. These steps are:
get the device
create a command queue
create resources, such as buffers, textures, and shaders
create a rendering pipeline
create a view
Step 1: Get the Device
This step involves the creation of a MTLDevice object, the heart of a Metal application. The MTLDevice class provides a direct way to communicate with the GPU driver and hardware. To get a reference to a MTLDevice instance, you call the MTLCreateSystemDefaultDevice function as shown below. With this reference, you have direct access to the device's hardware.
id <MTLDevice> mtlDevice = MTLCreateSystemDefaultDevice();
Step 2: Create a Command Queue
The MTLCommandQueue class provides a way to submit commands or instructions to the GPU. To initialize an instance of the MTLCommandQueue class, you need to use the MTLDevice object we created earlier and call the newCommandQueue method on it.
id <MTLCommandQueue> mtlCommandQueue = [mtlDevice newCommandQueue];
Step 3: Create Resources
This step involves the creation of your buffer objects, textures, and other resources. In this tutorial, we'll create vertices. These objects live on the server (GPU) side, and to communicate with them you need to declare a data structure whose layout matches the data stored in the vertex object.
For instance, if you need to pass data for a 2D vertex position, you should declare one data structure containing an object for that 2D position. Then, you must declare it in both client, your iOS application, and server side, the Metal shader. Take a look at the following example for clarification.
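As a sketch of that idea, the client-side declaration for a 2D position might look like the following. The structure name Vertex is illustrative; later in this tutorial we'll use a similar structure named Triangle.

```objc
// Client side (iOS application): one 2D vertex position.
// The Metal shader declares a matching structure so that both
// sides agree on the memory layout of each vertex.
typedef struct {
    GLKVector2 position; // x and y, two 32-bit floats
} Vertex;
```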
Step 4: Create a Rendering Pipeline
Creating the rendering pipeline is probably the trickiest step, since you must take care of several initializations and configurations, each of which is illustrated in the following diagram.
The rendering pipeline is configured using two classes:
MTLRenderPipelineDescriptor: provides all of your rendering pipeline states, such as vertex positions, color, depth, and stencil buffers, among others
MTLRenderPipelineState: the compiled version of the MTLRenderPipelineDescriptor, which is deployed to the device
Note that you don't need to create all of the rendering pipeline objects. You should just create the ones that meet your needs.
The following code snippet shows you how to create the MTLRenderPipelineDescriptor object.
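Creating the descriptor is a single allocation. A minimal sketch:

```objc
// Create an empty rendering pipeline descriptor to configure.
MTLRenderPipelineDescriptor *mtlRenderPipelineDescriptor =
    [[MTLRenderPipelineDescriptor alloc] init];
```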
At this point, you've created the descriptor, but you still need to configure it with at least the pixel format. This is what we do in the following code block.
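A sketch of that configuration is shown below. The function names SomeVertexMethodName and SomeFragmentMethodName are placeholders for the vertex and fragment functions you declare in your Metal shader file.

```objc
// Pixel format of the color attachment we render to.
mtlRenderPipelineDescriptor.colorAttachments[0].pixelFormat = MTLPixelFormatBGRA8Unorm;

// Look up the shader functions by name in the default library.
id <MTLLibrary> library = [mtlDevice newDefaultLibrary];
mtlRenderPipelineDescriptor.vertexFunction =
    [library newFunctionWithName:@"SomeVertexMethodName"];
mtlRenderPipelineDescriptor.fragmentFunction =
    [library newFunctionWithName:@"SomeFragmentMethodName"];
```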
The newFunctionWithName method searches your Metal source file, looking for the SomeVertexMethodName method. The name of the shader itself is not important since the lookup is done directly through the method names. This means that you should define unique methods for unique shader operations. We'll look deeper into Metal shaders later on.
With the MTLRenderPipelineDescriptor object created and configured, the next step is to create and define the MTLRenderPipelineState by passing in the newly created MTLRenderPipelineDescriptor object.
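A sketch of this step, assuming the mtlDevice and mtlRenderPipelineDescriptor objects created earlier:

```objc
// Compile the descriptor into an immutable pipeline state object.
NSError *pipelineError = nil;
id <MTLRenderPipelineState> renderPipelineState =
    [mtlDevice newRenderPipelineStateWithDescriptor:mtlRenderPipelineDescriptor
                                              error:&pipelineError];
if (!renderPipelineState) {
    NSLog(@"Failed to create render pipeline state: %@", pipelineError);
}
```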
Step 5: Create a View
To create a Metal view, you need to subclass UIView and override the layerClass method as shown below.
+ (Class)layerClass {
    return [CAMetalLayer class];
}
Later in this tutorial, we'll look at another way to create a CAMetalLayer, one that gives the developer more control over the layer's characteristics and configuration.
4. Drawing a Metal App
Now that we have initialized the necessary objects, we need to start drawing something onto the screen. Just like the initialization, you need to follow a number of steps:
get the command buffer
set a render pass
draw
commit to the command buffer
Step 1: Get the Command Buffer
The initial step is to create an object that stores a serial list of commands for the device to execute. You create a MTLCommandBuffer object and add commands that will be executed sequentially by the GPU. The following code snippet shows how to create a command buffer. We use the MTLCommandQueue object we created earlier.
id <MTLCommandBuffer> mtlCommandBuffer = [mtlCommandQueue commandBuffer];
Step 2: Start a Render Pass
In Metal, the rendering configuration is complex and you need to explicitly state when the render pass begins and when it ends. You need to define the framebuffer configurations up front in order for iOS to configure the hardware properly for that specific configuration.
For those familiar with OpenGL and OpenGL ES, this step is similar since the framebuffer has the same properties, Color Attachment (0 to 3), Depth, and Stencil configurations. You can see a visual representation of this step in the diagram below.
You first need to create a texture to render to. The texture is created from the CAMetalDrawable class and uses the nextDrawable method to retrieve the next texture to draw in the list.
id <CAMetalDrawable> frameDrawable;
frameDrawable = [renderLayer nextDrawable];
This nextDrawable call can become your application's bottleneck. The CPU and GPU may get out of sync, and one must then wait for the other, which can cause the call to block. There are synchronization mechanisms that can, and should, be implemented to solve these issues, but I won't cover them in this introductory tutorial.
Now that you have a texture to render to, you need to create an MTLRenderPassDescriptor object to store the framebuffer and texture information. Take a look at the following code snippet to see how this works.
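A sketch of the render pass configuration, using the mtlRenderPassDescriptor instance variable and the frameDrawable texture retrieved above:

```objc
// Describe the framebuffer for this render pass.
mtlRenderPassDescriptor = [MTLRenderPassDescriptor renderPassDescriptor];
mtlRenderPassDescriptor.colorAttachments[0].texture = frameDrawable.texture;
// Clear the texture instead of loading its previous contents.
mtlRenderPassDescriptor.colorAttachments[0].loadAction = MTLLoadActionClear;
mtlRenderPassDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(0.0, 0.0, 0.0, 1.0);
mtlRenderPassDescriptor.colorAttachments[0].storeAction = MTLStoreActionStore;
```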
The first step sets the texture to draw to. The second defines a specific load action, in this case clearing the texture so that its previous contents aren't loaded into the GPU cache. The final step sets the clear color, the background color of the frame, to a specific value.
Step 3: Draw
With the framebuffer and the texture configured, it's time to create a MTLRenderCommandEncoder instance. The MTLRenderCommandEncoder class is responsible for the traditional interactions with the screen and can be seen as a container for a graphics rendering state. It also translates your code into a hardware-specific command format that will be executed by the device.
id <MTLRenderCommandEncoder> renderCommand = [mtlCommandBuffer renderCommandEncoderWithDescriptor: mtlRenderPassDescriptor];
// Set MTLRenderPipelineState
// Draw objects here
[renderCommand endEncoding];
Step 4: Commit to the Command Buffer
We now have a buffer and instructions waiting in memory. The next step is to commit the commands to the command buffer and see the graphics drawn onto the screen. Note that the GPU only executes the commands you explicitly commit. The following lines of code let you schedule your framebuffer and commit the command buffer to the GPU.
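Scheduling and committing takes two calls on the command buffer:

```objc
// Schedule the drawable's texture for presentation once rendering completes,
// then hand the command buffer over to the GPU for execution.
[mtlCommandBuffer presentDrawable:frameDrawable];
[mtlCommandBuffer commit];
```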
At this point, you should have a general idea of how a Metal application is structured. However, to properly understand all this, you need to do it yourself. It's now time to code your first Metal application.
5. Creating a Metal Application
Start Xcode 6 and choose New > Project... from the File menu. Select Single View Application from the list of templates and choose a product name. Set Objective-C as the language and select iPhone from the Devices menu.
Open ViewController.m and add the following import statements at the top.
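These are the framework headers the rest of the implementation relies on:

```objc
#import <Metal/Metal.h>
#import <QuartzCore/CAMetalLayer.h>
```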
You also need to add the Metal and QuartzCore frameworks in the Linked Frameworks and Libraries section of the target's Build Phases. From now on, your attention should be focused on the implementation file of the ViewController class.
6. Creating the Metal Structure
As I mentioned earlier, your first task is to set and initialize the core objects used across the whole application. In the following code snippet, we declare a number of instance variables. These should look familiar if you've read the first part of this tutorial.
@implementation ViewController
{
id <MTLDevice> mtlDevice;
id <MTLCommandQueue> mtlCommandQueue;
MTLRenderPassDescriptor *mtlRenderPassDescriptor;
CAMetalLayer *metalLayer;
id <CAMetalDrawable> frameDrawable;
CADisplayLink *displayLink;
}
In the view controller's viewDidLoad method, we initialize the MTLDevice and CommandQueue instances.
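A sketch of that initialization, using the instance variables declared above:

```objc
- (void)viewDidLoad {
    [super viewDidLoad];

    // Grab the default Metal device and create a command queue on it.
    mtlDevice = MTLCreateSystemDefaultDevice();
    mtlCommandQueue = [mtlDevice newCommandQueue];
}
```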
You can now interact with your device and create command queues. It's now time to configure the CAMetalLayer object. Your CAMetalLayer layer should have a specific configuration depending on the device, pixel format, and frame size. You should also specify that it'll be using only the framebuffer and that it should be added to the current layer.
If you have a problem configuring the CAMetalLayer object, then the following code snippet will help you with this.
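A possible configuration, added to viewDidLoad after the device and command queue are created, looks like this:

```objc
// Configure the layer for this device, using the BGRA pixel format,
// restricting it to framebuffer use, and sizing it to the view.
metalLayer = [CAMetalLayer layer];
metalLayer.device = mtlDevice;
metalLayer.pixelFormat = MTLPixelFormatBGRA8Unorm;
metalLayer.framebufferOnly = YES;
metalLayer.frame = self.view.bounds;
[self.view.layer addSublayer:metalLayer];
```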
The only step left is to render something to the screen. Initialize the CADisplayLink, passing in self as the target and @selector(renderScene) as the selector. Finally, add the CADisplayLink object to the current run loop.
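In code, that amounts to two lines at the end of viewDidLoad:

```objc
// Call renderScene once per screen refresh.
displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(renderScene)];
[displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
```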
If you build the project, you'll notice that Xcode gives us one warning. We still need to implement the renderScene method.
The renderScene method is executed every frame. There are several objects that need to be initialized for each new frame, such as the MTLCommandBuffer and MTLRenderCommandEncoder objects.
The steps we need to take to render a frame are:
create a MTLCommandBuffer object
initialize a CAMetalDrawable object
initialize a MTLRenderPassDescriptor object
configure the texture, loadAction, clearColor, and storeAction properties of the MTLRenderPassDescriptor object
create a new MTLRenderCommandEncoder object
present the drawable and commit the command buffer
Feel free to revisit what we've seen so far to solve this challenge on your own. If you want to continue with this tutorial, then take a look at the solution shown below.
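One possible implementation of renderScene, following the steps above, is sketched below. At this stage it only clears the screen; we'll add geometry later.

```objc
- (void)renderScene {
    // 1. A fresh command buffer and drawable for this frame.
    id <MTLCommandBuffer> mtlCommandBuffer = [mtlCommandQueue commandBuffer];
    frameDrawable = [metalLayer nextDrawable];

    // 2. Describe the render pass: clear to black and store the result.
    mtlRenderPassDescriptor = [MTLRenderPassDescriptor renderPassDescriptor];
    mtlRenderPassDescriptor.colorAttachments[0].texture = frameDrawable.texture;
    mtlRenderPassDescriptor.colorAttachments[0].loadAction = MTLLoadActionClear;
    mtlRenderPassDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(0.0, 0.0, 0.0, 1.0);
    mtlRenderPassDescriptor.colorAttachments[0].storeAction = MTLStoreActionStore;

    // 3. Encode the (currently empty) render pass.
    id <MTLRenderCommandEncoder> renderCommand =
        [mtlCommandBuffer renderCommandEncoderWithDescriptor:mtlRenderPassDescriptor];
    // Nothing to draw yet.
    [renderCommand endEncoding];

    // 4. Present the drawable and commit the frame to the GPU.
    [mtlCommandBuffer presentDrawable:frameDrawable];
    [mtlCommandBuffer commit];
}
```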
We also need to implement the view controller's dealloc method in which we invalidate the displayLink object. We set the mtlDevice and mtlCommandQueue objects to nil.
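Under ARC, that method is short:

```objc
- (void)dealloc {
    // Stop the render loop and release the Metal objects.
    [displayLink invalidate];
    mtlDevice = nil;
    mtlCommandQueue = nil;
}
```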
7. Creating a Triangle
You now have a very basic Metal application. It's time to add your first graphical primitive, a triangle. The first step is to create a structure for the triangle.
typedef struct {
    GLKVector2 position;
} Triangle;
Don't forget to add an import statement for the GLKMath library at the top of ViewController.m.
#import <GLKit/GLKMath.h>
To render the triangle, you need to create a MTLRenderPipelineDescriptor object and a MTLRenderPipelineState object. In addition, every object that's drawn onto the screen is stored in a buffer conforming to the MTLBuffer protocol.
MTLRenderPipelineDescriptor *renderPipelineDescriptor;
id <MTLRenderPipelineState> renderPipelineState;
id <MTLBuffer> object;
With these instance variables declared, you should now initialize them in the viewDidLoad method as I explained earlier.
To shade the triangle, we'll need Metal shaders. The Metal shaders should be assigned to the MTLRenderPipelineDescriptor object and encapsulated through a MTLLibrary protocol. It may sound complex, but you only need to use the following lines of code:
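A sketch of those lines, added to viewDidLoad. The function names VertexColor and FragmentColor must match the functions we'll declare in the shader file; the descriptor allocation and pixel format are the same configuration we covered earlier.

```objc
renderPipelineDescriptor = [[MTLRenderPipelineDescriptor alloc] init];
renderPipelineDescriptor.colorAttachments[0].pixelFormat = MTLPixelFormatBGRA8Unorm;

// Load the compiled shaders and bind them to the pipeline.
id <MTLLibrary> library = [mtlDevice newDefaultLibrary];
renderPipelineDescriptor.vertexFunction = [library newFunctionWithName:@"VertexColor"];
renderPipelineDescriptor.fragmentFunction = [library newFunctionWithName:@"FragmentColor"];
renderPipelineState = [mtlDevice newRenderPipelineStateWithDescriptor:renderPipelineDescriptor
                                                                error:nil];
```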
The first line creates an object that conforms to the MTLLibrary protocol. In the second line, we tell the library which method needs to be invoked inside the shader to operate the vertex pass inside the rendering pipeline. In the third line, we repeat this step at the pixel level, the fragments. Finally, in the last line we create a MTLRenderPipelineState object.
In Metal, you can define your own coordinate system, but in this tutorial we'll use the default one, in which the center of the screen has the coordinates (0,0).
In the following code block, we create a triangle object with three coordinates, (-.5f, 0.0f), (0.5f, 0.0f), (0.0f, 0.5f).
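A sketch of that code, storing the vertex data in the object buffer declared earlier:

```objc
Triangle triangle[3] = {
    { GLKVector2Make(-0.5f, 0.0f) },
    { GLKVector2Make( 0.5f, 0.0f) },
    { GLKVector2Make( 0.0f, 0.5f) }
};

// Copy the vertex data into a GPU-visible buffer.
object = [mtlDevice newBufferWithBytes:triangle
                                length:sizeof(Triangle) * 3
                               options:MTLResourceOptionCPUCacheModeDefault];
```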
The viewDidLoad method is complete, but there's one last step missing, creating the shaders.
8. Creating Shaders
To create a Metal shader, select New > File... from the File menu, choose Source > Metal File from the iOS section, and name it MyShader. Xcode will then create a new file for you, MyShader.metal.
At the top, you should see the following two lines of code. The first one includes the Metal Standard Library while the second one uses the metal namespace.
#include <metal_stdlib>
using namespace metal;
The first step is to copy the triangle structure to the shader. Shaders are usually divided into two different operations, vertex and pixel (fragment). The first deals with the position of each vertex, while the second determines the final color of each pixel inside the polygon. Put differently, the first rasterizes the polygon, producing its pixels, and the second shades those same pixels.
Since they need to communicate in a unidirectional way, from vertex to fragment, it's best to create a structure for the data that will be passed. In this case, we only pass the position.
typedef struct {
float4 position [[position]];
} TriangleOutput;
Now, let's create the vertex and fragment methods. Remember when you programmed the RenderPipelineDescriptor object for both vertex and fragment? You used the newFunctionWithName method, passing in an NSString object. That string is the name of the method that you call inside the shader. This means that you need to declare two methods with those names, VertexColor and FragmentColor.
What does this mean? You can create your shaders and name them as you like, but you need to call the methods exactly as you declare them and they should have unique names.
Inside your shaders, add the following code block.
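A sketch of the two functions in the Metal Shading Language is shown below. The packed_float2 field mirrors the GLKVector2 field of the client-side Triangle structure, and TriangleOutput is the structure declared above.

```metal
// Server side (MyShader.metal): mirror of the client-side Triangle structure.
typedef struct {
    packed_float2 position;
} Triangle;

vertex TriangleOutput VertexColor(const device Triangle *triangles [[buffer(0)]],
                                  unsigned int vid [[vertex_id]]) {
    TriangleOutput out;
    float2 p = triangles[vid].position;
    out.position = float4(p, 0.0, 1.0); // promote the 2D position to clip space
    return out;
}

fragment half4 FragmentColor(TriangleOutput in [[stage_in]]) {
    return half4(1.0, 0.0, 0.0, 1.0); // shade every pixel red
}
```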
The VertexColor method receives the data stored at position 0 of the buffer (the memory we allocated) along with the vertex_id of each vertex. Since we declared a three-vertex triangle, the vertex_id will be 0, 1, and 2. It outputs a TriangleOutput object that's automatically received by the FragmentColor method, which then shades each pixel inside those three vertices with a red color.
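For the triangle to actually appear, the render command encoder in renderScene also needs to use the pipeline state and buffer we created, in place of the placeholder comments shown earlier. A sketch:

```objc
// Between creating the encoder and calling endEncoding:
[renderCommand setRenderPipelineState:renderPipelineState];
[renderCommand setVertexBuffer:object offset:0 atIndex:0]; // buffer(0) in the shader
[renderCommand drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:3];
```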
That's it. Build and run your application and enjoy your first, brand new 60fps Metal application.
9. External Resources
If you want to know more about the Metal framework and how it works, Apple's Metal framework documentation and the Metal sessions from WWDC 2014 are good places to start.
This tutorial will show you how to get started with Metal, a framework introduced in iOS 8 that supports GPU accelerated 3D graphics rendering and data parallel computation workloads. In this tutorial, we’ll take a look at the theoretical concepts that underly Metal. You'll also learn how to create a Metal application that sets the required hardware state for graphics, commits commands for execution in the GPU, and manages buffer, texture objects, and pre-compiled shaders.
1. First Things First
This tutorial assumes that you're familiar with the Objective-C language and have some experience with OpenGL, OpenCL, or a comparable graphics API.
It also requires a physical device with an Apple A7 or A8 processor. This means that you'll need an iPhone 5S, 6, or 6 Plus, or an iPad Air or mini (2nd generation). The iOS Simulator will give you compilation errors.
This tutorial is only focused on Metal and it won't cover the Metal Shading Language. We will create a shader, but we will only cover the basic operations to interact with it.
If you're using Xcode for the first time, then make sure that you add your Apple ID in the Accounts section of Xcode's Preferences. This will ensure that you don't run into problems when deploying an application onto your device.
Xcode 6 includes a project template for Metal, but to help you better understand Metal, we are going to create a project from scratch.
On a final note, we'll use Objective-C in this tutorial and it's important that you have a basic understanding of this programming language.
2. Introduction
For those of you who are familiar with OpenGL or OpenGL ES, Metal is a low-level 3D graphics framework, but with lower overhead. In contrast to Apple's Sprite Kit or Scene Kit frameworks with which you, by default, cannot interact with the rendering pipeline, with Metal you have absolute power to create, control, and modify that pipeline.
Metal has the following features:
The framework provides extremely low-overhead access to the A7 and A8 GPU, enabling incredibly high performance for sophisticated graphics rendering and computational tasks.
Metal eliminates many performance bottlenecks, such as costly state validation that is found in traditional graphics APIs.
It is explicitly designed to move all expensive state translation and compilation operations out of the runtime and rendering environment.
It provides precompiled shaders, state objects, and explicit command scheduling to ensure your application achieves the highest possible performance and efficiency for graphics rendering and computational tasks.
The framework was designed to exploit modern architectural considerations, such as multiprocessing and shared memory.
It is deeply integrated with iOS 8, the A7 and A8 chipsets, and the Apple hardware, creating a unified and independent framework.
Enough with the theory, it's time to understand how a Metal application is built.
3. Creating a Metal Application
A Metal application is characterized by a set of required steps to correctly present data on screen. These steps are usually created in order and some references are passed from one to another. These steps are:
get the device
create a command queue
create resources, such as buffers, textures, and shaders
create a rendering pipeline
create a view
Step 1: Get the Device
This step involves the creation of a MTLDevice object, the heart of a Metal application. The MTLDevice class provides a direct way to communicate with the GPU driver and hardware. To get a reference to a MTLDevice instance, you need to call the System Default Device as shown below. With this reference, you have direct access to the device's hardware.
id <MTLDevice> mtlDevice = MTLCreateSystemDefaultDevice();
Step 2: Create a Command Queue
The MTLCommandQueue class provides a way to submit commands or instructions to the GPU. To initialize an instance of the MTLCommandQueue class, you need to use the MTLDevice object we created earlier and call the newCommandQueue method on it.
id <MTLCommandQueue> mtlCommandQueue = [mtlDevice newCommandQueue];
Step 3: Create Resources
This step involves the creation of your buffer objects, textures, and other resources. In this tutorial, you will create vertices. These objects are stored on the server/GPU side and in order to communicate with them you need to create a specific data structure that must contain similar data to those available in the vertex object.
For instance, if you need to pass data for a 2D vertex position, you should declare one data structure containing an object for that 2D position. Then, you must declare it in both client, your iOS application, and server side, the Metal shader. Take a look at the following example for clarification.
Creating the rendering pipeline is probably the trickiest step, since you must take care of several initializations and configurations, each of which is illustrated in the following diagram.
The rendering pipeline is configured using two classes:
MTLRenderPipelineDescriptor: provides all of your rendering pipeline states, such as vertex positions, color, depth, and stencil buffers, among others
MTLRenderPipelineState: the compiled version of MTLRenderPipelineDescriptor and which will be deployed to the device
Note that you don't need to create all of the rendering pipeline objects. You should just create the ones that meet your needs.
The following code snippet shows you how to create the MTLRenderPipelineDescriptor object.
At this point, you've created the descriptor, but you still need to configure it with at least the pixel format. This is what we do in the following code block.
The newFunctionWithName method searches your Metal source file, looking for the SomeVertexMethodName method. The name of the shader itself is not important since the lookup is done directly through the method names. This means that you should define unique methods for unique shader operations. We'll look deeper into Metal shaders later on.
With the MTLRenderPipelineDescriptor object created and configured, the next step is to create and define the MTLRenderPipelineState by passing in the newly created MTLRenderPipelineDescriptor object.
To create a Metal view, you need to subclass UIView and override the layerClass method as shown below.
+(id)layerClass{
return [CAMetalLayer class];
}
In this tutorial, we'll look at another way to create a CAMetalLayer class that gives the developer more control over the layer's characteristics and configuration.
4. Drawing a Metal App
Now that we have initialized the necessary objects, we need to start drawing something onto the screen. Just like the initialization, you need to follow a number of steps:
get the command buffer
set a render pass
draw.
commit to the command buffer
Step 1: Get the Command Buffer
The initial step is to create an object that stores a serial list of commands for the device to execute. You create a MTLCommandBuffer object and add commands that will be executed sequentially by the GPU. The following code snippet shows how to create a command buffer. We use the MTLCommandQueue object we created earlier.
id <MTLCommandBuffer> mtlCommandBuffer = [mtlCommandQueue commandBuffer];
Step 2: Start a Render Pass
In Metal, the rendering configuration is complex and you need to explicitly state when the render pass begins and when it ends. You need to define the framebuffer configurations up front in order for iOS to configure the hardware properly for that specific configuration.
For those familiar with OpenGL and OpenGL ES, this step is similar since the framebuffer has the same properties, Color Attachment (0 to 3), Depth, and Stencil configurations. You can see a visual representation of this step in the diagram below.
You first need to create a texture to render to. The texture is created from the CAMetalDrawable class and uses the nextDrawable method to retrieve the next texture to draw in the list.
id <CAMetalDrawable> frameDrawable;
frameDrawable = [renderLayer nextDrawable];
This nextDrawable call can and would be your application bottleneck since it can easily block your application. The CPU and GPU may be desynchronized and the one must wait for the other, which can cause a block statement. There are synchronous mechanisms that can, and should always, be implemented to solve these issues, but I won't be covering these in this introductory tutorial.
Now that you have a texture to render to, you need to create an MTLRenderPassDescriptor object to store the framebuffer and texture information. Take a look at the following code snippet to see how this works.
The first step sets up the texture to draw. The second defines a specific action to take, in this case clearing the texture and preventing the content of that texture from being loaded into the GPU cache. The final step changes the background color to a specific color.
Step 3: Draw
With the framebuffer and the texture configured, it's time to create a MTLRenderCommandEncoder instance. The MTLRenderCommandEncoder class is responsible for the traditional interactions with the screen and can be seen as a container for a graphics rendering state. It also translates your code into a hardware-specific command format that will be executed by the device.
id <MTLRenderCommandEncoder> renderCommand = [mtlCommandBuffer renderCommandEncoderWithDescriptor: mtlRenderPassDescriptor];
// Set MTLRenderPipelineState
// Draw objects here
[renderCommand endEncoding];
Step 4: Commit to the Command Buffer
We now have a buffer and instructions waiting in memory. The next step is to commit the commands to the command buffer and see the graphics drawn onto the screen. Note that the GPU will only execute code that you specifically commit for the effect. The following lines of code let you schedule your framebuffer and commit the command buffer to the GPU.
At this point, you should have a general idea of how a Metal application is structured. However, to properly understand all this, you need to do it yourself. It's now time to code your first Metal application.
5. Creating a Metal Application
Start Xcode 6 and choose New > Project... from the File menu. Select Single View Application from the list of templates and choose a product name. Set Objective-C as the language and select iPhone from the Devices menu.
Open ViewController.m and add the following import statements at the top.
You also need to add the Metal and QuartzCore frameworks in the Linked Frameworks and Libraries section of the target's Build Phases. From now on, your attention should be targeted at the implementation file of the ViewController class.
6. Creating the Metal Structure
As I mentioned earlier, your first task is to set and initialize the core objects used across the whole application. In the following code snippet, we declare a number of instance variables. These should look familiar if you've read the first part of this tutorial.
@implementation ViewController
{
id <MTLDevice> mtlDevice;
id <MTLCommandQueue> mtlCommandQueue;
MTLRenderPassDescriptor *mtlRenderPassDescriptor;
CAMetalLayer *metalLayer;
id <CAMetalDrawable> frameDrawable;
CADisplayLink *displayLink;
}
In the view controller's viewDidLoad method, we initialize the MTLDevice and CommandQueue instances.
You can now interact with your device and create command queues. It's now time to configure the CAMetalLayer object. Your CAMetalLayer layer should have a specific configuration depending on the device, pixel format, and frame size. You should also specify that it'll be using only the framebuffer and that it should be added to the current layer.
If you have a problem configuring the CAMetalLayer object, then the following code snippet will help you with this.
The only step left is to render something to the screen. Initialize the CADisplayLink, passing in self as the target and @selector(renderScene) as the selector. Finally, add the CADisplayLink object to the current run loop.
If you build the project, you'll notice that Xcode gives us one warning. We still need to implement the renderScene method.
The renderScene method is executed every frame. There are several objects that need to be initialized for each new frame, such as the MTLCommandBuffer and MTLRenderCommandEncoder objects.
The steps we need to take to render a frame are:
create a MTLCommandBuffer object
initialize a CAMetalDrawable object
initialize a MTLRenderPassDescriptor object
configure the texture, loadAction, clearColor, and storeAction properties of the MTLRenderPassDescriptor object
create a new MTLRenderCommandEncoder object
present the drawable and commit the command buffer
Feel free to revisit what we've seen so far to solve this challenge on your own. If you want to continue with this tutorial, then take a look at the solution shown below.
We also need to implement the view controller's dealloc method in which we invalidate the displayLink object. We set the mtlDevice and mtlCommandQueue objects to nil.
You now have a very basic Metal application. It's time to add your first graphical primitive, a triangle. The first step is to create a structure for the triangle.
typedef struct {
GLKVector2 position;
}Triangle;
Don't forget to add an import statement for the GLKMath library at the top of ViewController.m.
#import <GLKit/GLKMath.h>
To render the triangle, you need to create a MTLRenderPipelineDescriptor object and a MTLRenderPipelineState object. In addition, every object that's drawn onto the screen belongs to the MTLBuffer class.
MTLRenderPipelineDescriptor *renderPipelineDescriptor;
id <MTLRenderPipelineState> renderPipelineState;
id <MTLBuffer> object;
With these instance variables declared, you should now initialize them in the viewDidLoad method as I explained earlier.
To shade the triangle, we'll need Metal shaders. The Metal shaders should be assigned to the MTLRenderPipelineDescriptor object and encapsulated through a MTLLibrary protocol. It may sound complex, but you only need to use the following lines of code:
The first line creates an object that conforms to the MTLLibrary protocol. In the second line, we tell the library which method needs to be invoked inside the shader to operate the vertex pass inside the rendering pipeline. In the third line, we repeat this step at the pixel level, the fragments. Finally, in the last line we create a MTLRenderPipelineState object.
Metal lets you define your own coordinate system, but in this tutorial we'll use the default coordinate system, in which the center of the screen has the coordinates (0, 0).
In the following code block, we create a triangle object with three coordinates, (-0.5, 0.0), (0.5, 0.0), and (0.0, 0.5).
The viewDidLoad method is complete, but there's one last step missing, creating the shaders.
8. Creating Shaders
To create a Metal shader, select New > File... from the File menu, choose Source > Metal File from the iOS section, and name it MyShader. Xcode will then create a new file for you, MyShader.metal.
At the top, you should see the following two lines of code. The first one includes the Metal Standard Library while the second one uses the metal namespace.
#include <metal_stdlib>
using namespace metal;
The first step is to copy the triangle structure to the shader. Shaders are usually divided into two operations, vertex and fragment. The vertex function computes the final position of each vertex, while the fragment function computes the final color of each pixel inside the polygon. You can look at it this way, the vertex stage determines where the polygon's pixels end up on screen, and the fragment stage shades those same pixels.
Since they need to communicate in a unidirectional way, from vertex to fragment, it's best to create a structure for the data that will be passed. In this case, we only pass the position.
typedef struct {
float4 position [[position]];
} TriangleOutput;
Now, let's create the vertex and fragment methods. Remember when you programmed the RenderPipelineDescriptor object for both vertex and fragment? You used the newFunctionWithName method, passing in an NSString object. That string is the name of the method that you call inside the shader. This means that you need to declare two methods with those names, VertexColor and FragmentColor.
What does this mean? You can name your shader functions whatever you like, but the strings you pass to newFunctionWithName must match the function names declared in the shader exactly, and each function must have a unique name.
Inside your shaders, add the following code block.
The VertexColor function receives the data stored at position 0 of the buffer (the allocated memory) and the vertex_id of the current vertex. Since we declared a three-vertex triangle, the vertex_id will be 0, 1, and 2. It outputs a TriangleOutput object that's automatically received by the FragmentColor function. Finally, the fragment function shades each pixel inside the triangle formed by those three vertices using a red color.
That's it. Build and run your application and enjoy your first, brand new 60fps Metal application.
9. External Resources
If you want to know more about the Metal framework and how it works, you can check out several other resources:
History has shown that the majority of us tend to be a little slow to adapt our skills when designing and developing for new platforms. Instead, we frequently find ourselves trying to transfer the same rules from a predecessor instead of creating anew. This is best illustrated in Don Norman's book, The Design of Everyday Things, in which he uses the example of the first automobiles and how we made them look like horse-drawn carriages, appropriately named "horseless carriages".
The same applies to today's products. After all, it's only been in recent years that we have developed suitable design principles for mobile devices such as engaging with the user via cards, streams, and notifications.
In this article, we'll briefly introduce two concepts we should keep in mind when designing for smartwatches. I have avoided the term "smartwatch apps" as it's better to think of the smartwatch as a peripheral or extension of a personal ecosystem or personal area network rather than an isolated device.
1. Wearables
Before introducing the concepts, let's look at wearables and what makes them a unique platform. Wearables are miniature electronic devices that are worn by the user under, with, or on top of clothing. What makes them unique are their inherent attributes, which include:
always on
always connected, user and location aware
always accessible
inherently part of personal ecosystem
ability to augment the user's actions
The ambition of wearables is to enable users to take real-world actions by providing relevant, contextual information precisely at the point of decision making. Achieving this means interpreting data in real time and intelligently pushing it to the most appropriate device(s) according to the user's current context, that is, providing just-in-time interactions and information. But with this new opportunity come new complexities; achieving simplicity for the user pushes the complexity onto the designer and developer.
For the user, some benefits of using wearables include:
ability to record the world around us
nudge us into action
communicate information easily and seamlessly with one another
allow us to control our environments
reflect our well-being back to us to help us manage it better
Opportunities to the user and others include:
better and more accurate understanding of the user and their current context (hyper-contextual targeting)
potential to reduce noise and better integrate into the user's life
augment reality without disrupting the user's flow
creation of new products and services, for example, Fitbit
In essence, wearables provide an opportunity for more intimate, timely, and relevant experiences. Two principles that help achieve this are signals and microinteractions. Let's take a look at each principle.
2. Signals
Time sensitivity, and therefore accuracy through ease of digesting information, becomes important with wearables. This means it's important to create timely, relevant, and glanceable information, known as signals.
The information displayed should be curated to precisely fit the immediate situation or task, with no extraneous data. Do not design a wearable experience for a function that's more effectively done on a smartphone, a tablet, or a piece of paper.
Successful wearable design aims at recognition, not reading. To make content timely and relevant, you should spend the majority of your time thinking about what people want to know, in sport or elsewhere, at any given moment. The more you know about what information people need and currently don't have, the more compelling your design will be.
3. Microinteractions
In Dan Saffer's book, Microinteractions: Designing with Details, he describes microinteractions as contained product moments, which revolve around a single use case—they have one main task. It's useful to use his Microinteraction Model (Trigger > Rules > Feedback > Loops) when designing for wearables, especially smartwatches. He describes each phase of this model as follows:
Trigger: a user or system action that initiates the microinteraction
Rules: determine the flow of the interaction
Feedback: communicates the rules to the user
Loops: determine how long the interaction goes on for
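Saffer's four phases can be sketched as a tiny state flow. The following is a minimal, illustrative model in plain Java; all class and method names are my own invention, not an API from any wearable framework.

```java
// A minimal sketch of the Trigger > Rules > Feedback > Loops model.
import java.util.ArrayList;
import java.util.List;

class Microinteraction {
    private final int maxLoops;

    Microinteraction(int maxLoops) {
        // Loops: determine how long the interaction goes on for.
        this.maxLoops = maxLoops;
    }

    // Trigger: a user or system action initiates the microinteraction.
    List<String> trigger(String event) {
        List<String> feedback = new ArrayList<>();
        for (int loop = 0; loop < maxLoops; loop++) {
            // Rules: determine the flow of the interaction.
            String result = "pass " + loop + ": handled '" + event + "'";
            // Feedback: communicate what happened back to the user.
            feedback.add(result);
        }
        return feedback;
    }

    public static void main(String[] args) {
        // A two-loop microinteraction triggered by raising the wrist.
        for (String line : new Microinteraction(2).trigger("wrist-raise")) {
            System.out.println(line);
        }
    }
}
```

The point of the sketch is the shape of the flow, not the implementation: one trigger, a small rule set, immediate feedback, and a bounded loop.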
The limited output and input capabilities make longer interactions less comfortable. It's therefore important to make them as short as possible, hence the prefix "micro". This is also why it's so important to leverage the user's context to improve efficiency and relevance by providing actionable content and possibly automating some of the tasks.
4. Building Smartwatch Experiences
To further explain these design principles, I'll walk you through implementing a simple application that considers both. In doing so we'll get to explore Android Wear, Google's framework for building wearable products.
Information That Moves With You
It seemed that the majority of the industry was focused on creating platforms to host apps for your wrist. This was until Google introduced Android Wear, its answer to Wearables, an extension to the Android platform designed specifically for delivering small chunks of information and facilitating quick and minimal interactions.
Android Wear essentially looks like an extension of Google Now, displaying contextual notifications about things such as traffic, weather alerts, incoming messages, sport scores, and travel updates, delivered to the user on cards. This works intuitively, rather than creating another isolated system.
What Can Be Developed on Android Wear?
As described in Android Design for Android Wear, the two core functions of Android Wear are Suggest and Demand. Suggest, the more exciting of the two, is nothing more than an extended notification that is delivered to the device, either locally or remotely from the connected handheld.
What is exciting about this is that it forces a shift in how we think about notifications and how we engage with the user. Just like design patterns in software development, the Android Wear design principles encourage the implementation of context-aware experiences, that is, trying to anticipate the user's needs.
Demand is for cases when Android Wear is unable to anticipate the user's needs and allows the user to initiate a task, relying heavily on voice for user interaction. It's important and emphasized throughout the documentation that the use cases and ergonomics of Android Wear devices differ from handhelds. It needs to work within the constraints of the wearable context rather than trying to squeeze your handheld design thinking onto an Android Wear device.
Google Now has helped set a standard for what can be done with contextual notifications. The following list contains some examples:
activity summary, such as running, biking, walking
nearby events & attractions
nearby offers
relevant and important events and news, such as weather & traffic reports
ticketing, such as boarding passes, coupons, tickets
reminders based on your calendar and current context
behavioral nudges, such as encouraging the user to stretch when they have been inactive
sport and stock updates
notes
As mentioned above, Suggest encourages thinking about how to anticipate the user's needs. To achieve this, you'll more than likely employ an intelligent software agent architecture, an autonomous service that monitors the user's context to proactively perform a task. Some examples of this include:
public transport delay alerts and providing the ability to find an alternative route
nudge the user into a healthier lifestyle by suggesting alternative modes of transport and/or routes to work (bike versus car)
monitor for sales on tagged items and provide the ability to purchase when the price drops
stock alerts with the ability to buy/sell
proactive watching, for example, Twitter and alerting you of possible job opportunities with the ability to flag and review on a more appropriate device
What's notable about the above use cases is that each is actionable and can be triggered by a microinteraction, offloading any heavy lifting to the most appropriate device.
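The agent pattern described above can be sketched in a few lines. The following plain-Java example models one of the listed use cases, watching a tagged item's price and surfacing an actionable alert when it drops. All names here are hypothetical and purely illustrative.

```java
// An illustrative sketch of an intelligent software agent: a service observes
// the user's context (here, an item's price) and proactively produces an
// actionable suggestion. Not a real wearable API.
class PriceWatchAgent {
    private final double targetPrice;

    PriceWatchAgent(double targetPrice) {
        this.targetPrice = targetPrice;
    }

    // Returns an actionable suggestion when the observed price drops to or
    // below the target, and null when no action is warranted.
    String observe(String item, double currentPrice) {
        if (currentPrice <= targetPrice) {
            return "Buy " + item + " now at " + currentPrice;
        }
        return null;
    }

    public static void main(String[] args) {
        PriceWatchAgent agent = new PriceWatchAgent(50.0);
        System.out.println(agent.observe("headphones", 60.0));
        System.out.println(agent.observe("headphones", 45.0));
    }
}
```

In a real deployment the alert would be delivered as a notification with an attached action, keeping the on-watch interaction down to a single tap.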
5. Building for Android Wear
Because Android Wear is an extension of Android, the majority of APIs available on Android are also available on Android Wear. Of course, there are a number of APIs that aren't available on Android Wear. You can read more about these in the Creating Wearable Apps documentation.
There are also a number of extensions to the platform to better cater to wearable devices. The following sections briefly outline these additions.
Notifications
The easiest way to extend your application and take advantage of Android Wear is through notifications, which are by default automatically delivered to wearables that are paired with a handheld. Because of their inherent constraints, notifications provide an ideal vehicle for engaging the user via an Android Wear device.
Notifications are delivered to a Context Stream, in which the user can quickly scan each notification and engage with those that interest them. Similar to Google Now, information is delivered on Cards to which actions can be attached to make the information actionable.
Apps
Although I have refrained from using the word "app" in this article, with Android Wear it is possible to create Activities and Services. There will be times when this makes sense, for example, when you need to monitor the user's heart rate in the background. A custom Activity or Service is launched with an Intent, with the added option of triggering it by voice.
Communicating With a Paired Handheld
Two approaches were introduced to handle communication between an Android Wear device and a paired handheld, synchronized data items and messages sent through MessageApi.
Data items provide storage and synchronization. The listening device will be notified of any changes. An example of this could be syncing the user's heart rate from the wearable with a paired handheld.
MessageApi provides a way of sending one-way, non-guaranteed signals to the paired handheld, for example, sending volume commands to Android TV.
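The behavioral difference between the two approaches can be modeled in plain Java: a data item is stored and its changes are observable, while a message is a one-shot, best-effort signal. The class below is an illustrative model only; the real APIs live in Google Play services.

```java
// A plain-Java model of the two communication styles between a wearable and
// a paired handheld. Illustrative names; not the Google Play services API.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class WearLink {
    private final Map<String, String> dataItems = new HashMap<>();
    private final List<String> changeLog = new ArrayList<>();

    // Data item: stored and synchronized; listeners are notified of changes.
    void putDataItem(String path, String value) {
        dataItems.put(path, value);
        changeLog.add("changed: " + path);
    }

    String getDataItem(String path) {
        return dataItems.get(path);
    }

    // Message: delivered best-effort; nothing is stored for later readers.
    boolean sendMessage(String path, String payload, boolean nodeReachable) {
        return nodeReachable; // if the node is unreachable, the message is lost
    }

    List<String> changes() {
        return changeLog;
    }

    public static void main(String[] args) {
        WearLink link = new WearLink();
        link.putDataItem("/heartrate", "72"); // synced, change recorded
        System.out.println(link.getDataItem("/heartrate"));
        System.out.println(link.sendMessage("/volume", "up", true));
    }
}
```

The rule of thumb this models: use data items for state that must survive and synchronize, and messages for transient commands where occasional loss is acceptable.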
Conclusion
Android Wear provides a flexible framework, giving you the power to dictate the experience. However, it's important that you build appropriate experiences that enrich the user's life with minimal distraction rather than rich, complex experiences. Think of Android Wear devices as lifestyle accessories rather than computing devices.
In the next article, we'll build a simple Android Wear experience to capture the essence of what we've discussed in this article.
In the previous article, I introduced two design principles aimed at wearables, signals and microinteractions. In this article, we'll create a sample Android Wear project to show how these principles apply in practice.
1. Concept
Imagine you're in the final hour of a bidding war for a much coveted item. The last thing you want, and what often happens, is being outbid just before the auction closes. In this scenario, there are obvious benefits to a smartwatch that gives you a convenient way to monitor such a bid and take timely action without disturbing you, the user, too much. In our example project, we'll walk through how we can realize this on an Android Wear device.
The trading site we'll be basing our example on is called TradeMe, my home country's equivalent to eBay. As with the majority of successful online services, TradeMe provides a clean and simple API that exposes the majority of functionality to developers. Because this article is about Android Wear, we'll be focusing just on the code related to Android Wear.
The flow diagram below shows the main logic of our project.
The bulk of the logic is handled by a service, BidWatcherService, on the paired handheld where it routinely pulls down the user's watch list. For each item, the service checks if there have been any changes and if the user has been outbid. For those that match these criteria, the service creates a notification whereby the user is notified of the changes and provided the opportunity to easily take action, for example, increasing their bid.
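The core of that service loop can be sketched in plain Java. The snippet below models the check BidWatcherService performs over the watch list; the WatchedItem fields and class names are hypothetical stand-ins for whatever the TradeMe API actually returns.

```java
// A plain-Java sketch of the BidWatcherService logic: for each watched item,
// check whether it changed and whether the user has been outbid, and collect
// a notification for each match. Field names are hypothetical.
import java.util.ArrayList;
import java.util.List;

class WatchedItem {
    String title;
    boolean changed;     // has the listing changed since the last poll?
    double myBid;        // the user's current bid
    double currentBid;   // the highest bid on the listing

    WatchedItem(String title, boolean changed, double myBid, double currentBid) {
        this.title = title;
        this.changed = changed;
        this.myBid = myBid;
        this.currentBid = currentBid;
    }
}

class BidWatcher {
    // Returns a notification line for every item that changed and on which
    // the user has been outbid.
    static List<String> checkWatchList(List<WatchedItem> items) {
        List<String> notifications = new ArrayList<>();
        for (WatchedItem item : items) {
            boolean outbid = item.currentBid > item.myBid;
            if (item.changed && outbid) {
                notifications.add("Outbid on: " + item.title);
            }
        }
        return notifications;
    }

    public static void main(String[] args) {
        List<WatchedItem> items = new ArrayList<>();
        items.add(new WatchedItem("vintage camera", true, 100.0, 120.0));
        items.add(new WatchedItem("lens", false, 50.0, 40.0));
        System.out.println(checkWatchList(items));
    }
}
```

In the real service this check runs on a schedule on the handheld, and each resulting line becomes a notification that Android delivers to the paired wearable.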
The Android Wear-specific code accounts for very little of the overall application but, as hopefully emphasized in this article, the challenge is in designing appropriate contextual experiences rather than in the actual implementation. Of course, you could create a custom and complex user interface if you so desire.
2. Extending Notifications for Android Wear
To use features specific to Android Wear, you must ensure your project is referencing the v4 Support Library. We start by obtaining a reference to the system's notification manager during initialization. To do this, we use the NotificationManagerCompat class from the support library rather than the NotificationManager class.
For each of our watch list items that have changed and considered important enough to notify the user, we create and show a notification.
NotificationCompat.Builder notificationBuilder = new NotificationCompat.Builder(this)
.setSmallIcon(R.drawable.small_icon)
.setContentTitle(mContext.getString(R.string.title_auction_update))
.setContentText(item.mTitle);
mNotificationManager.notify(notificationId, notificationBuilder.build());
That's it. We're now able to notify the user of any watched items that have changed. This is shown in the screenshots below.
The above screenshots show the emulated version of our notification on an Android Wear device. The leftmost screenshot shows a preview of the notification. The center and rightmost screenshots show notifications in focus.
We can, as the Android Wear documentation suggests, make this information more glanceable by adding a background image to the notification to give it more context. There are two ways to achieve this. We can set the notification's large icon, using the setLargeIcon method, or extend the notification with a WearableExtender object and set its background image. Because we're focusing on Android Wear, we'll use the WearableExtender class.
As its name suggests, the WearableExtender class is a helper class that wraps up the notification extensions that are specific to wearable devices. The following code demonstrates how we add a background image to our notifications.
NotificationCompat.WearableExtender wearableExtender = new NotificationCompat.WearableExtender();
wearableExtender.setBackground(
BitmapFactory.decodeResource(
getResources(), R.drawable.notification_background));
NotificationCompat.Builder notificationBuilder = new NotificationCompat.Builder(this)
.setSmallIcon(R.drawable.small_icon)
.setContentTitle(mContext.getString(R.string.title_auction_update))
.setContentText(item.mTitle)
.extend(wearableExtender);
We create a WearableExtender object, set its background, and assign it to the notification using the extend method. The following screenshot shows the updated notification.
I have three items on my watch list. At the moment, I have a separate Card for each of the items. When designing notifications for a handheld, we would use a summary notification, but this doesn't translate well to Android Wear devices. For this reason, the concept of a Stack was introduced.
Stacks are created by assigning related notifications to the same group. This allows the user to discard or ignore them as a group, or to expand them and handle each notification individually. This is achieved by setting the group of each notification using the setGroup method, as shown in the next code block.
NotificationCompat.Builder notificationBuilder = new NotificationCompat.Builder(this)
.setSmallIcon(R.drawable.small_icon)
.setContentTitle(mContext.getString(R.string.title_auction_update))
.setContentText(item.mTitle)
.setGroup(NOTIFICATION_GROUP_KEY)
.extend(wearableExtender);
The following screenshots show examples of notifications being stacked and expanded.
Stacks are a substitute for summary notifications on a handheld. Stacks are not displayed on a handheld, so you need to explicitly create a summary notification for handhelds. Similar to what we did in the above code block, set the notification's group to the stack group using the setGroup method, but also mark it as the group summary by invoking the setGroupSummary method with true.
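The resulting display rules are easy to get backwards, so here is a small plain-Java model of them: on the wearable, non-summary notifications sharing a group key collapse into one stack, while on the handheld only the group's summary notification is shown. This models the behavior only; it is not the NotificationCompat API.

```java
// A plain-Java model of stack versus summary display behavior.
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class GroupedNotifications {
    static class Note {
        final String group;
        final boolean isSummary;

        Note(String group, boolean isSummary) {
            this.group = group;
            this.isSummary = isSummary;
        }
    }

    // Wearable: one stack per group; summary notifications are not displayed.
    static int stacksOnWearable(List<Note> notes) {
        Map<String, Integer> groups = new LinkedHashMap<>();
        for (Note n : notes) {
            if (!n.isSummary) {
                groups.merge(n.group, 1, Integer::sum);
            }
        }
        return groups.size();
    }

    // Handheld: only the group's summary notification is shown.
    static int shownOnHandheld(List<Note> notes) {
        int count = 0;
        for (Note n : notes) {
            if (n.isSummary) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        List<Note> notes = List.of(
                new Note("auctions", false),
                new Note("auctions", false),
                new Note("auctions", true)); // the handheld summary
        System.out.println(stacksOnWearable(notes));
        System.out.println(shownOnHandheld(notes));
    }
}
```

Three watch-list notifications plus one summary thus appear as a single expandable stack on the watch and a single summary card on the phone.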
In some instances, you may want to display more detail to the user. This can be useful for giving the user additional information without requiring them to pull out their handheld. Android Wear has Pages for this exact reason. Pages allow you to assign additional Cards to a notification to expose more information. These are revealed by swiping left.
To add an additional page, we simply create a new notification and add it to our WearableExtender object using the addPage method.
NotificationCompat.BigTextStyle auctionDetailsPageStyle =
new NotificationCompat.BigTextStyle()
.setBigContentTitle(mContext.getString(R.string.title_auction_details))
.bigText(String.format(this.getString(
R.string.copy_notification_details),
item.mMaxBidAmount,
item.getTimeRemainingAsString(),
item.mBidCount));
Notification detailsPageNotification = new NotificationCompat.Builder(this)
.setSmallIcon(R.drawable.small_icon)
.setStyle(auctionDetailsPageStyle)
.build();
NotificationCompat.WearableExtender wearableExtender = new NotificationCompat.WearableExtender();
wearableExtender.setBackground(
BitmapFactory.decodeResource(getResources(),
R.drawable.notification_background));
wearableExtender.addPage(detailsPageNotification);
The following screenshots show a notification with two pages. We are now providing the user with timely and relevant information.
The final step is making this information actionable. To do this, we add actions just like we did with notifications earlier. The two actions we add allow the user to automatically increase their bid or explicitly set their bid.
Let's first add an automatic bid. The following code snippet should look familiar to any Android developer.
The following screenshots show the action along with the confirmation state.
With the second action, we want to enable the user to set a specific price. Working with the constraints of the Android Wear device our options are:
launch the appropriate screen on the handheld
provide a stepper control the user can use to increment the current bid
provide the user with some predefined options
allow the user to use their voice
One of the attractive aspects of Android Wear is that its architecture and design are geared towards voice. This makes sense given the form factor and the context in which a wearable device like a smartwatch is used.
Implementing this is similar to the above, but this time we also instantiate a RemoteInput object and assign it to the action. The RemoteInput instance takes care of the rest.
The RemoteInput object takes a string in the constructor. This string, EXTRA_BID_AMOUNT, is the identifier used by the broadcast receiver when retrieving the result as shown below.
The following screenshot shows an example of a RemoteInput instance in action.
An obvious extension to this would be to enable the user to explicitly request an update. To implement this, you would create an Activity for the Android Wear device that listens for voice commands. Once received, broadcast the request to the paired mobile device and finish the Activity. But that's for another time.
Conclusion
That concludes our example project in which we now offer the user relevant and actionable information, delivering it to them with minimal disruption. As mentioned in the previous article, Android Wear let's you implement anything you want, but I hope this article has shown how enhanced notifications are an efficient and effective way to extend your service to Android Wear devices.
In the previous article, I've introduced two design principles aimed at wearables, signals and microinteractions. In this article, we'll create a sample Android Wear project to show how these principles apply in practice.
1. Concept
Imagine you're in the final hour of a bidding war for a much coveted item. The last thing you want, and what often happens, is being outbid just before the auction closes. In this scenario, there are obvious benefits to having a smartwatch that gives you a convenient way to monitor such a bid and take timely action without disturbing you too much. In our example project, we'll walk through how we can realize this on an Android Wear device.
The trading site we'll be basing our example on is called TradeMe, my home country's equivalent to eBay. As with the majority of successful online services, TradeMe provides a clean and simple API that exposes the majority of functionality to developers. Because this article is about Android Wear, we'll be focusing just on the code related to Android Wear.
The flow diagram below shows the main logic of our project.
The bulk of the logic is handled by a service, BidWatcherService, on the paired handheld, which routinely pulls down the user's watch list. For each item, the service checks if there have been any changes and if the user has been outbid. For each item that matches these criteria, the service creates a notification that informs the user of the changes and gives them the opportunity to easily take action, for example, by increasing their bid.
The actual Android Wear specific code accounts for very little of the overall application but, as hopefully emphasized in this article, the challenge is in designing appropriate contextual experiences rather than the actual implementation. Of course, you could create a custom and complex user interface if you so desire.
2. Extending Notifications for Android Wear
To use features specific to Android Wear, you must ensure your project is referencing the v4 Support Library. We start by obtaining a reference to the system's notification manager during initialization. To do this, we use the NotificationManagerCompat class from the support library rather than the NotificationManager class.
For each of our watch list items that has changed and is considered important enough to notify the user about, we create and show a notification.
NotificationCompat.Builder notificationBuilder = new NotificationCompat.Builder(this)
.setSmallIcon(R.drawable.small_icon)
.setContentTitle(mContext.getString(R.string.title_auction_update))
.setContentText(item.mTitle);
mNotificationManager.notify(notificationId, notificationBuilder.build());
That's it. We're now able to notify the user of any watched items that have changed. This is shown in the screenshots below.
The above screenshots show the emulated version of our notification on an Android Wear device. The leftmost screenshot shows a preview of the notification. The center and rightmost screenshots show notifications in focus.
We can, as the Android Wear documentation suggests, make this information more glanceable by adding a background image to the notification to give it more context. There are two ways to achieve this. We can set the notification's large icon, using the setLargeIcon method, or extend the notification with a WearableExtender object and set its background image. Because we're focusing on Android Wear, we'll use the WearableExtender class.
As its name suggests, the WearableExtender class is a helper class that wraps up the notification extensions that are specific to wearable devices. The following code demonstrates how we add a background image to our notifications.
NotificationCompat.WearableExtender wearableExtender = new NotificationCompat.WearableExtender();
wearableExtender.setBackground(
BitmapFactory.decodeResource(
getResources(), R.drawable.notification_background));
NotificationCompat.Builder notificationBuilder = new NotificationCompat.Builder(this)
.setSmallIcon(R.drawable.small_icon)
.setContentTitle(mContext.getString(R.string.title_auction_update))
.setContentText(item.mTitle)
.extend(wearableExtender);
We create a WearableExtender object, set its background, and assign it to the notification using the extend method. The following screenshot shows the updated notification.
I have three items on my watch list. At the moment, I have a separate Card for each of the items. When designing notifications for a handheld, we would use a summary notification, but this doesn't translate well to Android Wear devices. For this reason, the concept of a Stack was introduced.
Stacks are created by assigning related notifications to the same group. This allows the user to dismiss or ignore them as a group, or to expand them and handle each notification individually. This is achieved by setting the group of each notification using the setGroup method as shown in the next code block.
NotificationCompat.Builder notificationBuilder = new NotificationCompat.Builder(this)
.setSmallIcon(R.drawable.small_icon)
.setContentTitle(mContext.getString(R.string.title_auction_update))
.setContentText(item.mTitle)
.setGroup(NOTIFICATION_GROUP_KEY)
.extend(wearableExtender);
The following screenshots show examples of notifications being stacked and expanded.
Stacks are a substitute for summary notifications on a handheld. Stacks are not displayed on a handheld and you therefore need to explicitly create a summary notification for handhelds. Similar to what we did in the above code block, set the notification's group, using the setGroup method, to the stack group, but also set group summary to true by invoking the setGroupSummary method.
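As a sketch, the handheld-only summary notification might be built like this. It reuses the NOTIFICATION_GROUP_KEY from the earlier snippet; the string resource and SUMMARY_NOTIFICATION_ID constant are assumptions for illustration.

```java
// Summary notification shown only on the handheld; the stacked
// notifications replace it on the wearable because they share the
// same group key.
Notification summaryNotification = new NotificationCompat.Builder(this)
        .setSmallIcon(R.drawable.small_icon)
        .setContentTitle(mContext.getString(R.string.title_auction_update))
        .setContentText(mContext.getString(R.string.copy_watch_list_changes))
        .setGroup(NOTIFICATION_GROUP_KEY)
        .setGroupSummary(true)
        .build();
mNotificationManager.notify(SUMMARY_NOTIFICATION_ID, summaryNotification);
```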
In some instances, you may want to display more detail to the user. This can be useful for giving the user additional information without requiring them to pull out their handheld. Android Wear has Pages for this exact reason. Pages allow you to assign additional Cards to a notification to expose more information. These are revealed by swiping left.
To add an additional page, we simply create a new notification and add it to our WearableExtender object using the addPage method.
BigTextStyle auctionDetailsPageStyle =
new NotificationCompat.BigTextStyle()
.setBigContentTitle(mContext.getString(R.string.title_auction_details))
.bigText(String.format(this.getString(
R.string.copy_notification_details),
item.mMaxBidAmount,
item.getTimeRemainingAsString(),
item.mBidCount));
Notification detailsPageNotification = new NotificationCompat.Builder(this)
.setSmallIcon(R.drawable.small_icon)
.setStyle(auctionDetailsPageStyle)
.build();
NotificationCompat.WearableExtender wearableExtender = new NotificationCompat.WearableExtender();
wearableExtender.setBackground(
BitmapFactory.decodeResource(getResources(),
R.drawable.notification_background));
wearableExtender.addPage(detailsPageNotification);
The following screenshots show a notification with two pages. We are now providing the user with timely and relevant information.
The final step is making this information actionable. To do this, we add actions just like we did with notifications earlier. The two actions we add allow the user to automatically increase their bid or explicitly set their bid.
Let's first add an automatic bid. The following code snippet should look familiar to any Android developer.
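A sketch of what that snippet might look like is shown below. The AutoBidReceiver class, the ACTION_AUTO_BID string, and the icon and string resources are assumptions for illustration, not part of the original project.

```java
// Intent delivered to a broadcast receiver on the handheld when the
// user taps the action. AutoBidReceiver and ACTION_AUTO_BID are
// hypothetical names.
Intent autoBidIntent = new Intent(this, AutoBidReceiver.class);
autoBidIntent.setAction(ACTION_AUTO_BID);
PendingIntent autoBidPendingIntent = PendingIntent.getBroadcast(
        this, 0, autoBidIntent, PendingIntent.FLAG_UPDATE_CURRENT);

// Build the action and add it to the wearable-specific extender so
// it appears on the Android Wear device.
NotificationCompat.Action autoBidAction =
        new NotificationCompat.Action.Builder(
                R.drawable.ic_auto_bid,
                getString(R.string.action_auto_bid),
                autoBidPendingIntent)
        .build();
wearableExtender.addAction(autoBidAction);
```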
The following screenshots show the action along with the confirmation state.
With the second action, we want to enable the user to set a specific price. Working within the constraints of the Android Wear device, our options are:
launch the appropriate screen on the handheld
provide a stepper control the user can use to increment the current bid
provide the user with some predefined options
allow the user to use their voice
One of the attractive aspects of Android Wear is that its architecture and design embrace voice interaction. This makes sense given the form factor and the context in which a wearable device like a smartwatch is used.
Implementing this is similar to the above but, in addition, we instantiate a RemoteInput object and assign it to the action. The RemoteInput instance takes care of the rest.
The RemoteInput object takes a string in the constructor. This string, EXTRA_BID_AMOUNT, is the identifier used by the broadcast receiver when retrieving the result as shown below.
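A minimal sketch of both sides follows. The setBidPendingIntent, icon, and string resources are hypothetical; EXTRA_BID_AMOUNT and the support library RemoteInput API are as described above.

```java
// Create the RemoteInput, keyed by EXTRA_BID_AMOUNT, which the
// broadcast receiver later uses to look up the spoken result.
RemoteInput remoteInput = new RemoteInput.Builder(EXTRA_BID_AMOUNT)
        .setLabel(getString(R.string.label_set_bid))
        .build();

NotificationCompat.Action setBidAction =
        new NotificationCompat.Action.Builder(
                R.drawable.ic_set_bid,
                getString(R.string.action_set_bid),
                setBidPendingIntent)
        .addRemoteInput(remoteInput)
        .build();
wearableExtender.addAction(setBidAction);

// In the broadcast receiver, retrieve the voice result from the intent.
Bundle results = RemoteInput.getResultsFromIntent(intent);
if (results != null) {
    CharSequence bidAmount = results.getCharSequence(EXTRA_BID_AMOUNT);
}
```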
The following screenshot shows an example of a RemoteInput instance in action.
An obvious extension to this would be to enable the user to explicitly request an update. To implement this, you would create an Activity for the Android Wear device that listens for voice commands. Once received, broadcast the request to the paired mobile device and finish the Activity. But that's for another time.
Conclusion
That concludes our example project in which we now offer the user relevant and actionable information, delivering it to them with minimal disruption. As mentioned in the previous article, Android Wear lets you implement anything you want, but I hope this article has shown how enhanced notifications are an efficient and effective way to extend your service to Android Wear devices.
Each release of Xcode presents developers with enhanced tools to help them build their apps. This year's release, Xcode 6, introduces new ways for developers to design and build their software. In this tutorial, I'll outline the new and improved features in Xcode 6 and take a look at how you can use them.
1. Playgrounds
During this year's WWDC, Apple introduced Swift, a new programming language for developing software for its devices. In line with this, Xcode 6 comes with a new feature called Playgrounds that provides an interactive work area where developers can write Swift code and get live feedback without having to run the code on a device or simulator. This is a nice addition to Xcode as you can now experiment with code and get quick, real-time results before incorporating it into your main code base.
2. Interface Builder
A major topic at this year's WWDC was building adaptive applications. Instead of building applications that target specific screen sizes, developers are encouraged to develop applications that adapt to the device they run on, irrespective of its screen size.
This is a move that started a couple of releases back with the introduction of Auto Layout in iOS 6, enabling developers to create apps that work on both the 3.5" and 4.0" screens. It's now been further improved to enable iOS developers to build apps that run on all supported iPhones, including the new 4.7" iPhone 6 and 5.5" iPhone 6 Plus, and iPads using the same code base.
Interface Builder has undergone major changes that enable developing such adaptive apps. New features have also been added that improve the user interface design process. We will look at these new changes next.
Size Classes
Size classes define the canvas size used in layouts. They allow you to specify how the application's user interface changes when the available size of your view controller changes. This makes it possible to have a unified storyboard when building a universal application. Previously you had to design two separate storyboards, one for iPad and one for iPhone.
A size class identifies a relative amount of display space for the height (vertical dimension) and width (horizontal dimension). There are currently two size classes, compact and regular. For example, an iPhone in portrait will have a compact width and regular height. An iPad will have a regular width and height in both portrait and landscape orientations.
But you should note that a size class doesn't necessarily map to one device in one orientation. For instance, an iPad can have a view with an iPhone style layout (a compact horizontal and a regular vertical size class) when presented on a smaller space on the device, and an iPad style layout (a regular horizontal and a regular vertical size class) when the available space is larger.
You change size classes by using the Size Classes control near the layout toolbar at the bottom of the Interface Builder canvas. Interface Builder starts you out in the any width and any height size class where you can lay out common user interface components and constraints for the different screen sizes and orientations. You then update the parts that need to change when the available screen size changes by making changes to the user interface in the different size classes.
Adaptive Segue Types
Xcode 6 introduces adaptive segue types that are more suitable for the new adaptive layouts since they present views differently according to the environment they are run in. For example, using Show Detail with a Split View on an iPad will replace the Detail, but on an iPhone it's going to push that Detail aside onto the Master. Some of the old segues, such as push and modal, are now deprecated.
Live Rendering
The Interface Builder canvas is more interactive than ever. Previously, you had to run your app to see changes related to custom objects, custom fonts, and localization. Now, you can select custom fonts from the Interface Builder font picker and have them show up in the Interface Builder canvas.
You can even create custom objects and have them render on the Interface Builder canvas. You do this by creating a custom framework, adding your custom class to that target, and marking that class with the @IBDesignable flag (IB_DESIGNABLE in Objective-C). This lets Interface Builder know that a class can display custom content on its canvas.
Other than being able to view custom objects in Interface Builder, you can also mark properties with the @IBInspectable flag and have them appear in the Interface Builder inspector menu, in which they can be edited just like any other properties on your views. It is not a requirement for a class to be marked designable for it to have inspectable properties.
You can also specify design time only code. You can use this, for example, to pre-populate the view with example data to get a more accurate feel for the interface. You do this by overriding the prepareForInterfaceBuilder method. Other than that, you can use #if TARGET_INTERFACE_BUILDER to opt code in or out of being run in the final Interface Builder rendering.
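A minimal Objective-C sketch of these pieces might look as follows; the BadgeView class and its properties are made up for illustration.

```objc
IB_DESIGNABLE
@interface BadgeView : UIView

// Editable from the Attributes inspector in Interface Builder.
@property (nonatomic) IBInspectable UIColor *badgeColor;
@property (nonatomic) IBInspectable NSInteger badgeCount;

@end

@implementation BadgeView

// Design-time only: pre-populate the view with example data so the
// Interface Builder canvas rendering looks realistic.
- (void)prepareForInterfaceBuilder {
    [super prepareForInterfaceBuilder];
    self.badgeCount = 42;
}

@end
```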
Preview Editor
The Preview Editor now allows you to view multiple previews of different simulated devices side by side. Not only can you see how your app looks on different devices, but you can also set each of the devices to be in either portrait or landscape mode. This provides a fast way to preview your app's user interface on different devices and orientations without first running it.
3. Game Development
Apple added new game technologies to Xcode 6 and iOS 8, namely SceneKit and Metal. SceneKit, which was previously available on OS X, is a 3D scene renderer. Metal is a framework that can be used to create highly optimized graphics rendering and computational tasks thanks to its low-overhead access to the A7 and A8 GPU.
SpriteKit has also been improved with per-pixel physics occlusion, physics fields, inverse kinematics and constraints, shaders, lighting, and shadows.
Another significant new feature in SpriteKit is the SpriteKit Level Editor that lets you visually assemble scenes. Just as you can create your user interface in Interface Builder without writing any code, you can do the same in a SpriteKit game with the SpriteKit Level Editor.
4. OS X Development
Storyboards
Storyboards have now been introduced to OS X development. Just as in iOS development, they let you set up your view layouts and wire views together with different segue animations. At the time of writing, some features, including storyboards, are still disabled in Xcode (6.0.1) for OS X development pending the OS X Yosemite release.
Gesture Recognizers
Gesture recognizers are now available in AppKit. These are used pretty much in the same way as in iOS development. You can view the available gestures in the Object Library in Interface Builder.
5. Localization
Localization is done differently in Xcode 6 than it was previously. You can now export all of your localizable content into XLIFF, which is the industry standard that's understood by a lot of translation services. When you get the translations back, you import them and Xcode will merge the new content into your project. You should have one XLIFF file for each language you support in your app.
You can now preview localized content without changing your device's or simulator's locale in Settings. To do this, select Product > Scheme > Edit Scheme, then select Run and click on the Options tab. You can select your language of choice from the Application Language menu. Xcode comes with the Double Length Pseudolanguage that you can test with if you haven't added any other language. When you run the app, you should see the localized content.
You can also view localized content without running your app. To do this, you use the Preview Editor to switch between the different languages that your app supports. The default language will display in the bottom-right corner of the editor and when you click on it, you are presented with a list of the available languages. To test it out without adding a language, you can use the Double Length Pseudolanguage.
6. iOS Simulator
Named Devices
Xcode 6 now presents named simulators that correspond to specific devices, such as iPhone 5s, instead of the previous generic names, such as 64-bit iPhone Retina.
Resizable Simulator
Among the devices you can choose from are the resizable iPhone and resizable iPad. These allow you to specify the width, height, and size classes of the simulator. With this, you can test the adaptivity of your app on all of Apple's existing devices as well as any future devices, without needing to download a simulator for each device.
Simulator Custom Configurations
With the new iOS simulator, you can keep data and configuration settings grouped together. Run one configuration for one version of an app with its own data and another configuration for a different app version. This means that you can simulate having multiple users on your machine. Each user will have their own data and configurations.
7. HomeKit Accessory Simulator
The HomeKit framework allows your app to communicate with and control connected accessories in a user’s home. In the beta versions of Xcode 6, the HomeKit Accessory Simulator came as part of Xcode, but it is now part of the Hardware I/O Tools for Xcode. You can download it at the iOS Dev Center.
8. Debugging
View Debugging
Xcode 6 makes debugging your app's user interface much easier with the live view debugging feature. You are now able to pause your running app and dissect the paused user interface in a 3D view. The view debugger shows you your view hierarchy and Auto Layout constraints. If you select a view, you can inspect its properties in the inspector or jump to the relevant code in the assistant editor. With this, you can inspect such issues as Auto Layout conflicts, see why a view is hidden or clipped, etc.
To start the live view debugger, launch your app and click the Debug View Hierarchy button on the debug toolbar.
Your app pauses and you're presented with a 3D visualization of its user interface. You can drag anywhere on the canvas to rotate the view.
You can switch between various view states with the buttons below the canvas.
From left to right:
Show Clipped Content: This option hides or shows content that's being clipped in a selected view.
Show Constraints: It shows the Auto Layout constraints of a selected view.
Reset Viewing Area: This resets the canvas to its default state.
Adjust View Mode: This mode lets you select how you want to see your views. There's an option to see it as a wireframe, the view's contents, or both.
Zoom Out, Actual Size, Zoom In: This lets you set the view's scale.
Quick Look
Quick Look was introduced in Xcode 5 and it enables you to view an object's contents when debugging. Quick Look supports common objects like images, bezier paths, map locations, etc.
In Xcode 6, this has been improved to support two new object types, views (UIView and NSView) and custom objects. To enable Quick Look for custom objects, you implement the debugQuickLookObject method in the custom class.
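For example, a custom class can return a string, or any other object Quick Look already understands, describing its state. The AuctionItem class and its properties here are hypothetical.

```objc
@implementation AuctionItem

// Called by the debugger when Quick Look is invoked on an instance
// of this class.
- (id)debugQuickLookObject {
    return [NSString stringWithFormat:@"%@ (%ld bids)",
            self.title, (long)self.bidCount];
}

@end
```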
Enhanced Queue Debugging
The debug navigator records and displays recently executed blocks as well as enqueued blocks. You can use it to see where your enqueued blocks are and to examine the details of what’s been set up to execute. You can enable block debugging by selecting the Debug > Debug Workflow > Always Show Pending Blocks in Queues menu option.
Debug Gauges
Debug gauges provide information about your app's resource usage while debugging. Xcode 6 features updated gauges, including profiling support for the new Metal framework and an iCloud gauge that covers documents in the cloud and CloudKit.
Other than these improvements, Xcode 6 introduces two new debug gauges, network and disk activity.
Network activity shows how much data your app is sending and receiving as well as a list of open connections. You can view a history timeline to monitor the network usage, helping you work out when and why spikes in network usage or network failures happened.
Disk activity shows real-time information of your app's reads and writes to disk. It also gives information on all the open files. There is a history timeline of this disk I/O activity for you to monitor.
9. Asset Catalog
Asset catalogs now support size classes. This means you can now easily adapt your user interface for compact and regular height or width by providing different images for each size class.
Previously asset catalogs only supported PNG images, but in Xcode 6, support for JPEG and PDF vector images has been added.
10. Launch Images
You can use a XIB or storyboard as your application's launch image. The operating system generates the necessary launch images for your app. With this, you don't need to provide individual assets for the launch images and you can design them in Interface Builder.
To set a XIB or storyboard as your app's launch image, select the project in the Project Navigator and choose a target from the list of targets. Under the General tab, locate the section App Icons and Launch Images and select the correct file from the menu labeled Launch Screen File.
11. Testing
Asynchronous Testing
New APIs have been added to the XCTest framework that enable testing asynchronous code. This is done through expectation objects, XCTestExpectation, which describe expected events. XCTestCase has a new API that waits for the expectation to be fulfilled and sets a timeout on it. A completion handler is called either when all the events are fulfilled or when the timeout is hit. A test can wait on multiple asynchronous events at the same time. You can now easily test system interactions that execute asynchronously, such as file I/O, network requests, etc.
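A sketch of the pattern is shown below; the fetcher object and its completion-block API are hypothetical stand-ins for whatever asynchronous code you are testing.

```objc
- (void)testAsynchronousFetch {
    // Describe the event we expect to happen.
    XCTestExpectation *expectation =
        [self expectationWithDescription:@"fetch completes"];

    [self.fetcher fetchWithCompletion:^(NSData *data, NSError *error) {
        XCTAssertNil(error);
        // Mark the expectation as fulfilled when the callback fires.
        [expectation fulfill];
    }];

    // Fail the test if the expectation isn't fulfilled within two seconds.
    [self waitForExpectationsWithTimeout:2.0 handler:nil];
}
```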
Performance Measurement
The enhanced XCTest framework can now quantify the performance of each part of an app. Xcode runs your performance tests and lets you define a baseline performance metric. Each subsequent test run compares performance, displays the change over time, and—by highlighting the problem area—alerts you to sudden regressions a code commit could introduce. If the average performance measure deviates considerably from the baseline, the test will fail. This is a great way to detect performance regressions in your app.
Test Profiling
With the introduction of performance testing comes the ability to profile tests in Instruments. You can select a test or test suite to profile and do further investigation and analysis in Instruments to find out why the test failed and find the cause of the regression.
12. Instruments
Instruments has an updated user interface. With the new template chooser, you can select your device and target as well as the starting point for your profiling session.
There is a new Counters template that has been combined with Events to provide a powerful view into individual CPU events. You can even specify formulas to measure event aggregates, ratios, and more.
In Xcode 6, Instruments also ships with support for Swift and you can also use it to profile app extensions. There's also support for simulator configurations. The simulator configurations are treated like devices by Instruments, making it easy to launch or attach to processes in the simulator.
Conclusion
Apple continues to improve its developer tools and this is seen in every major release of Xcode. Xcode 6 improves on its predecessors to give developers tools that will improve their workflow and make the whole development process significantly better.
No one wants to ship buggy software. Ensuring that you release a mobile application of the highest quality requires much more than a human-driven manual quality assurance process. New devices and operating systems are released to the public each year. This means that there is an ever expanding combination of screen sizes and operating system versions on which you must test your mobile application. Not only would manual testing be extremely time consuming, it also neglects an entire piece of the modern software engineering process: automated quality assurance testing.
In today’s world, there are many tools available that can be used to automatically test the software you write. Some of these tools are maintained through an open source model, but there is also a core set provided by Apple. With each new release of the iOS SDK, Apple has continued to show their commitment towards improving the tools available for developers to test the code they write. For the iOS developer who is new to automated testing and interested to get started, Apple’s tools are a good place to start.
1. Apple Provides Helpful Tools
This tutorial is going to provide instructions for using a tool that Apple provides for automated testing, XCTest. XCTest is Apple’s unit testing framework. Unit testing is the type of automated testing that verifies code at the lowest level. You write Objective-C code that calls methods from your "production" code and verify that the code under test actually does what it's intended to do. Are variables set correctly? Is the return value correct?
Tests written with the XCTest framework can be executed repeatedly against your application's code to help you gain confidence that you are creating a bug-free product and that new code changes aren't breaking existing functionality.
By default, every new Xcode project is created with a good starting point for writing unit tests. This includes three things:
a separate target for your tests
a group for your test classes
an example test
Let's dig into the structure of an iOS unit test. An individual unit test is represented as a single method within any subclass of XCTestCase where the method returns void, takes no parameters, and the method name begins with test.
- (void)testSomething {}
Luckily, Xcode makes creating test cases easy. With new Xcode projects, an initial test case is created for you in a separate file group whose name is suffixed by the word Tests.
2. Creating Your First iOS Unit Test
I've created a sample project that can be used as a reference for the examples provided in this tutorial. Download the project from GitHub and open it in Xcode.
Step 1: Create the Test Case Class
In the sample project, you can find the group of tests in the folder named JumblifyTests.
To create your first test case, right click the file group, JumblifyTests, and select New File. Choose Test Case Class from the iOS > Source section, and give the new subclass a name.
The typical naming convention is to name the test case such that it is the name of the corresponding class under test, suffixed with Tests. Since we'll be testing the JumblifyViewController class, name the XCTestCase subclass JumblifyViewControllerTests.
Step 2: Remove the Boilerplate Code
In the brand new XCTestCase subclass, you’ll see four methods. Two of these are tests themselves. Can you identify which they are? Remember that test method names begin with the word "test".
If you didn't figure it out, the test methods created by default are testExample and testPerformanceExample.
Delete both tests, because we're going to write ours from scratch. The other two methods, setUp and tearDown, are overridden from the superclass, XCTestCase. They are unique in that setUp and tearDown are called before and after each test method is invoked respectively. They are useful places to centralize code that should be executed before or after each test method is called. Tasks like common initialization or cleanup go here.
Step 3: Connect Your Test With Your Class Under Test
Import the header file of the JumblifyViewController class and add a property of type JumblifyViewController to the XCTestCase subclass.
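Putting that together, the top of JumblifyViewControllerTests.m might look like this. Instantiating the view controller in setUp is an assumption about where you choose to create it, but it ensures each test starts with a fresh instance.

```objc
#import <XCTest/XCTest.h>
#import "JumblifyViewController.h"

@interface JumblifyViewControllerTests : XCTestCase

// The instance of the class under test.
@property (nonatomic) JumblifyViewController *vcToTest;

@end

@implementation JumblifyViewControllerTests

- (void)setUp {
    [super setUp];
    // Create a fresh instance before each test method runs.
    self.vcToTest = [[JumblifyViewController alloc] init];
}

@end
```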
Step 4: Write the Test Method
We're now going to write a test to test the reverseString: method of the JumblifyViewController class.
Create a test method that uses the instantiated vcToTest object to test the reverseString: method. In this test method, we create an NSString object and pass it to the view controller's reverseString: method. It's common convention to give your test a meaningful name to make it clear what the test is testing.
At this point, we haven't done anything useful yet, because we haven't tested the reverseString: method yet. What we need to do is compare the output of the reverseString: method with what we expect the output to be.
The XCTAssertEqualObjects function is part of the XCTest framework. The XCTest framework provides many other methods to make assertions about application state, such as variable equality or boolean expression results. In this case, we have stated that two objects must be equal. If they are, the test passes and if they aren't, the test fails. Take a look at Apple’s documentation for a comprehensive list of assertions provided by the XCTest framework.
- (void)testReverseString {
NSString *originalString = @"himynameisandy";
NSString *reversedString = [self.vcToTest reverseString:originalString];
NSString *expectedReversedString = @"ydnasiemanymih";
XCTAssertEqualObjects(expectedReversedString, reversedString, @"The reversed string did not match the expected reverse");
}
If you try to compile the code at this point, you'll notice a warning when you attempt to call reverseString: from the test case. The reverseString: method is a private method of the JumblifyViewController class. This means that other objects cannot invoke this method since it's not defined in the header file of the JumblifyViewController class.
While writing testable code is a mantra that many developers follow, we don't want to unnecessarily modify our code under test. But how do we call the private reverseString: method of the JumblifyViewController class in our tests? We could add a public definition of the reverseString: method to the header file of the JumblifyViewController class, but that breaks the encapsulation pattern.
Step 5: Adding a Private Category
One solution is to add a private category on the JumblifyViewController class to expose the reverseString: method. We add this category to the XCTestCase subclass, which means it's only available in that class. By adding this category, the test case will compile without warnings or errors.
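The category looks something like the following, assuming reverseString: takes and returns an NSString as it's used in the test.

```objc
// Exposes the private method to the test case only; the production
// header of JumblifyViewController remains unchanged.
@interface JumblifyViewController (Testing)

- (NSString *)reverseString:(NSString *)stringToReverse;

@end
```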
Step 6: Running the Tests
Let's run our tests to ensure that they pass. There are several ways to run unit tests for an iOS application. I'm a keyboard shortcut junkie, so my preferred technique for running the unit tests of my application is pressing Command-U. This keyboard shortcut runs all the tests of your application. You can also perform the same action by selecting Test from the Product menu.
As your test suite grows, or if you practice test-driven development, you'll find that running the entire test suite can become time consuming or get in the way of your workflow. A very useful command buried in Xcode's menus, and one I've fallen in love with, is Command-Option-Control-U. It runs only the test your cursor is currently in. Once your test suite is fleshed out and finalized, you should always run the entire test suite, but running an individual test is useful while you're writing a new test or debugging a failing one.
The command to run one test is complemented by Command-Option-Control-G, which reruns the last test run. This can be the entire test suite or only the most recent test you are working on. It's also useful in case you've navigated away from whatever test you're working on and you're still in the process of debugging it.
Step 7: Reviewing the Results
You can see your test results in a couple of places. One of those places is the Test Navigator on the left.
Another option is by looking at the gutter of the Source Editor.
In either of these two places, clicking the green diamond with the white checkmark will rerun that particular test. In the case of a failed test, you'll see a red diamond with a white cross in the center. Clicking it will also rerun that particular test.
3. New in Xcode 6
Xcode 6 introduced two exciting additions to unit testing on iOS and OS X: testing asynchronous functionality and measuring the performance of a specific piece of code.
Asynchronous Testing
Prior to Xcode 6, there was no good way to unit test asynchronous code. If your unit test called a method that contained asynchronous logic, you couldn't verify the asynchronous logic. The test would complete before the asynchronous logic in the method under test was executed.
To test asynchronous code, Apple introduced an API that lets developers define an expectation that must be fulfilled for the test to complete successfully. The flow is as follows: define an expectation, wait for the expectation to be fulfilled, and fulfill the expectation when the asynchronous code has finished executing. Take a look at the example below for clarification.
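The original listing isn't reproduced here, but a minimal sketch of the flow, assuming the method under test accepts a completion block that hands back a result, could look like this:

```objc
- (void)testDoSomethingThatTakesSomeTime {
    // Step 1: Define an expectation.
    XCTestExpectation *expectation = [self expectationWithDescription:@"completion block invoked"];

    // Call the asynchronous method under test.
    [self.objectToTest doSomethingThatTakesSomeTimesWithCompletionBlock:^(NSString *result) {
        // Step 3: Assert, then fulfill the expectation once the asynchronous code finishes.
        XCTAssertNotNil(result, @"Expected a non-nil result");
        [expectation fulfill];
    }];

    // Step 2: Wait for the expectation; the test fails if the timeout expires first.
    [self waitForExpectationsWithTimeout:2.0 handler:nil];
}
```

Note that objectToTest and the completion block's signature are assumptions for illustration.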
In this example, we’re testing the doSomethingThatTakesSomeTimesWithCompletionBlock method. We want to hinge success or failure of our test on the value that is returned in the completion block called by the method under test.
To do this, we define an expectation at the start of the test method. At the end of the test method, we wait for the expectation to be fulfilled. As you can see, we can also pass in a timeout parameter.
The actual assertion of the test is made inside the completion block of the method under test, in which we also fulfill the expectation we defined earlier. As a result, when the test is run, it waits until the expectation is fulfilled, or fails if the timeout expires before the expectation is fulfilled.
Performance Testing
Another addition to unit testing in Xcode 6 is the ability to measure the performance of a piece of code. This allows developers to gain insight into the specific timing information of the code that's being tested.
With performance testing, you can answer the question "What is the average time of execution for this piece of code?" If there is a section that is especially sensitive to changes in terms of the time it takes to execute, then you can use performance testing to measure the amount of time it takes to execute.
You can also define a baseline execution time. This means that if the code that's being tested significantly deviates from that baseline, the test fails. Xcode will repeatedly execute the code that's being tested and measure its execution time. To measure the performance of a piece of code, use the measureBlock: API as shown below.
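As an example, a performance test for the reverseString: method from earlier in this tutorial could be sketched like this:

```objc
- (void)testReverseStringPerformance {
    // Xcode runs this block repeatedly and reports the average execution time.
    [self measureBlock:^{
        [self.vcToTest reverseString:@"himynameisandy"];
    }];
}
```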
Set or edit the baseline time of execution for the performance test.
Conclusion
In this tutorial, you've learned how to use Xcode to create unit tests that verify an iOS application in a programmatic and automated way. Give it a try, either on an existing code base or something brand new. Whether you make a full commitment to unit testing or add a couple of tests here and there, you're adding value to your project by writing more thoroughly verified software that's less prone to breaking with future changes. Unit testing is only the beginning of automated software testing. There are several additional layers of testing you can add to an iOS application.
As you create applications with Xamarin.Forms, you will no doubt like the simplicity of creating user interfaces. Using Xamarin.Forms, you are able to use the same terminology for controls across multiple platforms.
While this concept can be very powerful, as a designer or a developer, it can be somewhat limiting. It may seem like we're forced to use the native user interface controls that come with each of the platforms without the ability to add customization. This is not the case.
In order to get into the process of customizing the user interface for specific platforms, you must first understand the rendering process of Xamarin.Forms.
2. Control Rendering
When it comes to using Xamarin.Forms to create a user interface for your cross platform mobile application, there are two important pieces to the puzzle that you must understand.
Element
The first piece of the puzzle is the element. You can think of an element as the platform agnostic definition of a control within Xamarin.Forms. If you have read through the documentation at all, you will know that these controls are also referred to as View objects. To be even more specific, every element within Xamarin.Forms derives from the View class.
These elements are used to describe a visual element. They provide a platform agnostic definition of how the control should look and behave. An element on its own can't actually create a control that is displayed to the user. It needs some help. This is where the second piece of the rendering process comes in, a renderer.
Renderer
A renderer comes into play when you run your application. The job of the renderer is to take the platform agnostic element and transform it into something visual to present to the user.
For example, if you were using a Label control in your shared project, during the running of your application, the Xamarin.Forms framework would use an instance of the LabelRenderer class to draw the native control. If you are starting to wonder how this happens from a shared code project, that's a very good question. The answer is, it doesn't.
Let's illustrate this with an example. Start by opening either Xamarin Studio or Visual Studio. The process and concepts are the same for both. If you are using Xamarin Studio, there is no support for Windows Phone projects so you will only create three projects in your solution. If you are using Visual Studio, you will create four projects.
In Visual Studio, create a new project, select the Mobile Apps project family on the left, and choose the Blank App (Xamarin.Forms Portable) project template on the right. You can name your project anything you like, but if you wish to follow along with me, use the name Customization and then click OK.
Now, depending on your IDE, you should see either three or four projects in your solution. If you expand the References folder in your Customization (Portable) project, you will see an assembly reference to Xamarin.Forms.Core. This is where all the different elements are defined for you to use in your shared user interface project. Nothing out of the ordinary there.
If you open each of the platform specific projects and expand their References folders, you'll see that each one contains a platform specific implementation of that Xamarin.Forms library, named Xamarin.Forms.Platform.Android, Xamarin.Forms.Platform.iOS, and Xamarin.Forms.Platform.WP8 respectively.
It's in these assemblies that you'll find the renderers for each of the Xamarin.Forms elements. Now you are beginning to see the layout of the process. The platform agnostic elements, or View objects, are in the shared code project, but all the specific renderers for the elements are in the platform specific projects.
This means that for each of the elements you use, there will be two renderers involved in Xamarin Studio and three in Visual Studio. Now that you see how this is structured in Xamarin.Forms, the next logical question is usually, "When should I use customizations?"
3. When to Customize
There are definitely a good number of properties and characteristics that are defined within Xamarin.Forms elements that can be used to customize the final control on each of the platforms. Having said that though, not every customization available in each of the platforms exists in Xamarin.Forms. That being the case, there are two main scenarios when you will want to create customizations.
The first scenario when customizations will be needed is when you want to create a completely custom control. Let's say you wanted to create a calendar control or maybe some sort of graphing control. Unfortunately nothing like that exists today in Xamarin.Forms, which is not to say that it never will.
This is definitely a situation where you will need to start from square one and create everything from scratch. You will need to define the element you are going to use to describe the characteristics of the control in a platform agnostic manner. Then you will also need to create a custom renderer for each of the platforms you wish to support.
Depending on what you are building, this can be a rather extensive project. That being the case, I will save that for another tutorial in and of itself. Instead, in this tutorial, we will be focusing on the second scenario in which you will need some customization.
The second situation in which you'll need some customization is when a built-in element doesn't support a specific feature of a platform you wish to target. An example of this is the Label control. In Xamarin.Forms, there is no mechanism or property that lets you make the text of a Label bold or italic on each of the platforms. This may seem like a very simple scenario, but you'll find that the basic process of making this option available in the element and having the renderer understand it is the same as in more complex scenarios.
With the second scenario in mind, you have two options. You can either replace the existing renderer for a specific platform (or for all platforms) and create your own functionality and drawing logic for all the capabilities of the element. Alternatively, you can create your own element that derives from the existing element and associate that new element with a custom renderer. This way, you will retain all the default logic and rendering capabilities of the base element and customize it as you wish. This will be the route we take for this example. Now, let's see how to add this functionality to our own project.
4. Adding Customization
Let's start this process by setting up the basic structure of our application so we can see our baseline and then make changes. Start by opening your App.cs file in the Customization (Portable) project in the Solution Explorer. Modify the GetMainPage method to look like this:
public static Page GetMainPage() {
    var iLabel = new Label {
        TextColor = Color.Black,
        Text = "I want to be italicized!",
        HorizontalOptions = LayoutOptions.CenterAndExpand
    };
    var bLabel = new Label {
        Text = "I want to be bold!",
        TextColor = Color.Black,
        HorizontalOptions = LayoutOptions.CenterAndExpand
    };
    var bothLabel = new Label {
        Text = "I want to be italicized and bold!",
        TextColor = Color.Black,
        HorizontalOptions = LayoutOptions.CenterAndExpand
    };
    return new ContentPage {
        BackgroundColor = Color.White,
        Content = new StackLayout {
            Padding = 100,
            Spacing = 100,
            Children = { iLabel, bLabel, bothLabel }
        }
    };
}
As you can see here, we have created three simple Label controls. One wants to be italicized, one wants to be bold, and the third is greedy and wants to be both. If you were to run this application on iOS, Android, and Windows Phone, they would look something like this:
iOS
Android
Windows Phone
As you can see, they don't want to be this boring. Well, don't just sit there, help them out.
Step 1: Creating a New Element
The first thing we need to do is create a new element that we can use to provide additional customizations to the existing Label control. Start by adding a new class to your Customization (Portable) project and name it StyledLabel. Replace its contents with the following:
public enum StyleType {
    None,
    Italic,
    Bold,
    BoldItalic
}

public class StyledLabel : Label {
    public StyleType Style { get; set; }
}
We define a very simple enumeration and class. The enumeration allows for italic, bold, and bold plus italic values. We then create a StyledLabel class that derives from the Label base class and add a new property, Style, to hold the style we want to apply to the control.
To make sure everything still works, and it should, let's modify the App.cs file once more and replace the Label elements in our first example with our new StyledLabel elements. Because the StyledLabel class inherits from the Label class, everything should still work.
public static Page GetMainPage() {
    var iLabel = new StyledLabel {
        TextColor = Color.Black,
        Text = "I want to be italicized!",
        HorizontalOptions = LayoutOptions.CenterAndExpand,
        Style = StyleType.Italic
    };
    var bLabel = new StyledLabel {
        Text = "I want to be bold!",
        TextColor = Color.Black,
        HorizontalOptions = LayoutOptions.CenterAndExpand,
        Style = StyleType.Bold
    };
    var bothLabel = new StyledLabel {
        Text = "I want to be italicized and bold!",
        TextColor = Color.Black,
        HorizontalOptions = LayoutOptions.CenterAndExpand,
        Style = StyleType.BoldItalic
    };
    return new ContentPage {
        BackgroundColor = Color.White,
        Content = new StackLayout {
            Padding = 100,
            Spacing = 100,
            Children = { iLabel, bLabel, bothLabel }
        }
    };
}
Once again, here are the results of this change.
iOS
Android
Windows Phone
As you can see, nothing has changed. Now that we have a new custom element, it is time to create the custom renderers to take care of the native controls.
Step 2: Android Renderer
The first step in creating a renderer is adding a new class to the platform project you are targeting. We start with the Xamarin.Android project. Within this project, create a new class file, name it StyledLabelRenderer, and replace its contents with the following:
using Android.Graphics;
using Customization;
using Customization.Droid;
using Xamarin.Forms;
using Xamarin.Forms.Platform.Android;

[assembly: ExportRenderer(typeof(StyledLabel), typeof(StyledLabelRenderer))]
namespace Customization.Droid
{
    public class StyledLabelRenderer : LabelRenderer
    {
        protected override void OnElementChanged(ElementChangedEventArgs<Label> e)
        {
            base.OnElementChanged(e);
            var styledLabel = (StyledLabel)Element;
            switch (styledLabel.Style)
            {
                case StyleType.Bold:
                    Control.SetTypeface(null, TypefaceStyle.Bold);
                    break;
                case StyleType.Italic:
                    Control.SetTypeface(null, TypefaceStyle.Italic);
                    break;
                case StyleType.BoldItalic:
                    Control.SetTypeface(null, TypefaceStyle.BoldItalic);
                    break;
            }
        }
    }
}
We start with a special assembly attribute that tells Xamarin.Forms to use this StyledLabelRenderer class as the renderer every time it tries to render StyledLabel objects. This is required for your customizations to work properly.
Just like when we created a new StyledLabel element, we inherited from the Label class, we will have our new StyledLabelRenderer class inherit from the LabelRenderer class. This will allow us to keep the existing functionality so we only have to override what we want to change or customize.
In order to apply our new formatting, we are going to need to jump into the rendering process and we do that via the OnElementChanged method. In this method, we can do all of our customizations.
When doing your customizations, there are two very important properties you will be using. First, you will need to get a reference to the original element that you created and that is being rendered in our custom renderer method. You do this by using the Element property. This is a generic object so you will have to cast this to whatever type you are rendering. In this case, it is a StyledLabel.
var styledLabel = (StyledLabel)Element;
The second important property you need is the Control property. This property contains a typed reference to the native control on the platform. In this case, since you have inherited from the LabelRenderer class, the code already knows that the Control in this case is a TextView.
From this point, you will use some simple logic to determine which customization to perform and apply the appropriate native customizations. In this case, you will use the Android mechanism for modifying the typeface of a TextView by using the SetTypeface method.
switch (styledLabel.Style)
{
    case StyleType.Bold:
        Control.SetTypeface(null, TypefaceStyle.Bold);
        break;
    case StyleType.Italic:
        Control.SetTypeface(null, TypefaceStyle.Italic);
        break;
    case StyleType.BoldItalic:
        Control.SetTypeface(null, TypefaceStyle.BoldItalic);
        break;
}
If you were to run this application now, you should see something like the following in the Android Emulator, which is exactly what we aimed for.
Step 3: iOS Renderer
The process of creating the iOS renderer is exactly the same up until the point of overriding the OnElementChanged method. Begin by creating a new class in your Customization.iOS project. Name it StyledLabelRenderer and replace the contents with the following:
using Customization;
using Customization.iOS;
using MonoTouch.UIKit;
using Xamarin.Forms;
using Xamarin.Forms.Platform.iOS;

[assembly: ExportRenderer(typeof(StyledLabel), typeof(StyledLabelRenderer))]
namespace Customization.iOS
{
    public class StyledLabelRenderer : LabelRenderer
    {
        protected override void OnElementChanged(ElementChangedEventArgs<Label> e)
        {
            base.OnElementChanged(e);
            var styledLabel = (StyledLabel)Element;
            switch (styledLabel.Style)
            {
                case StyleType.Bold:
                    Control.Font = UIFont.BoldSystemFontOfSize(16.0f);
                    break;
                case StyleType.Italic:
                    Control.Font = UIFont.ItalicSystemFontOfSize(16.0f);
                    break;
                case StyleType.BoldItalic:
                    Control.Font = UIFont.FromName("Helvetica-BoldOblique", 16.0f);
                    break;
            }
        }
    }
}
As you can see, everything is exactly the same. You have the same assembly attribute, you are overriding the same OnElementChanged method, you are casting the Element property to a StyledLabel, and you have the same shell of a switch statement to work through the Style property.
The only difference comes in where you are applying the styling to the native UILabel control.
switch (styledLabel.Style)
{
    case StyleType.Bold:
        Control.Font = UIFont.BoldSystemFontOfSize(16.0f);
        break;
    case StyleType.Italic:
        Control.Font = UIFont.ItalicSystemFontOfSize(16.0f);
        break;
    case StyleType.BoldItalic:
        Control.Font = UIFont.FromName("Helvetica-BoldOblique", 16.0f);
        break;
}
The way you make a UILabel's Font property either bold or italic in iOS is through a static helper method on the UIFont class named either BoldSystemFontOfSize or ItalicSystemFontOfSize. That will work in the case of either a bold font or an italic font, but not both. If you try to apply both of these to a UILabel, only the last one will render.
To get both styles, we will cheat a little and use a built-in font in iOS named Helvetica-BoldOblique. This font has both bold and italic built-in so we don't have to do them individually.
Running this in the iOS Simulator will give you the following result:
Step 4: Windows Phone Renderer
Finally, we come to Windows Phone. As you may have already guessed, the process is exactly the same. Create a new class in the Customization.WinPhone project, name it StyledLabelRenderer and replace the contents with the following:
using System.Windows;
using Customization;
using Customization.WinPhone;
using Xamarin.Forms;
using Xamarin.Forms.Platform.WinPhone;

[assembly: ExportRenderer(typeof(StyledLabel), typeof(StyledLabelRenderer))]
namespace Customization.WinPhone
{
    public class StyledLabelRenderer : LabelRenderer
    {
        protected override void OnElementChanged(ElementChangedEventArgs<Label> e)
        {
            base.OnElementChanged(e);
            var styledLabel = (StyledLabel)Element;
            switch (styledLabel.Style)
            {
                case StyleType.Bold:
                    Control.FontWeight = FontWeights.Bold;
                    break;
                case StyleType.Italic:
                    Control.FontStyle = FontStyles.Italic;
                    break;
                case StyleType.BoldItalic:
                    Control.FontStyle = FontStyles.Italic;
                    Control.FontWeight = FontWeights.Bold;
                    break;
            }
        }
    }
}
Once again, everything is the same except for the logic. In this case, to make the text italic, you set the TextBlock's FontStyle property to Italic. Then to make the text bold, you set the FontWeight property to Bold. If you want to apply both, you simply set both.
Running this application in the Windows Phone emulator will give you the following result:
You have now successfully created a fully functional, customized, cross platform element that renders itself perfectly on all three platforms. You should now feel ready to take on the world. Well, almost.
The process that we have followed throughout this tutorial is completely valid and in most cases is going to work perfectly. There is a very specific case, though, in which we will be missing out on some functionality if we use that approach. That case is data-binding in XAML.
5. XAML and Data-Binding
One of the very cool features of Xamarin.Forms is the fact that you get to use XAML and data-binding just as you would if you were creating a Windows Phone, WPF, or Silverlight application. Unfortunately data-binding and XAML are beyond the scope of this tutorial, but I encourage you to read more about this topic on the XAML for Xamarin.Forms page.
Step 1: Building the XAML Page
Let's start by building a simple XAML page that duplicates the user interface we've previously created in code. Start by adding a new file to your Customization (Portable) project, selecting the Forms XAML Page file type and giving it a name of StyledLabelPage.
Once the file is created, replace the contents with the following:
<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:local="clr-namespace:Customization;assembly=Customization"
             x:Class="Customization.StyledLabelPage">
    <StackLayout BackgroundColor="White" Spacing="100" Padding="100">
        <local:StyledLabel Text="I want to be italicized" TextColor="Black" HorizontalOptions="CenterAndExpand" Style="Italic" />
        <local:StyledLabel Text="I want to be bold" TextColor="Black" HorizontalOptions="CenterAndExpand" Style="Bold" />
        <local:StyledLabel Text="I want to be italicized and bold" TextColor="Black" HorizontalOptions="CenterAndExpand" Style="BoldItalic" />
    </StackLayout>
</ContentPage>
This XAML will create the exact same page that we have been working with before. Note the addition of the xmlns:local namespace declaration at the top of the file as well as the local: prefix before each reference to the StyledLabel objects. Without these, the XAML parser won't know what a StyledLabel is and ultimately won't be able to run.
In order to run this, you will need to make two small modifications. First, open the App.cs file and modify the GetMainPage method to look like this:
public static Page GetMainPage() {
    return new StyledLabelPage();
}
Second, open the StyledLabelPage.xaml.cs file and change it to look like this:
public partial class StyledLabelPage : ContentPage
{
    public StyledLabelPage()
    {
        InitializeComponent();
    }
}
Now, when you run your applications, you should get the same results on all three platforms. Pretty neat, huh?
iOS
Android
Windows Phone
Step 2: Adding Data-Binding
If you are familiar with the concept of the Model View View-Model pattern (MVVM), you will know that one of its primary characteristics is data-binding. In fact, this pattern was designed around the use of XAML.
Data-binding is the process of allowing the properties of two objects to be linked together so that a change in one will create a change in the other. The process of data-binding within XAML is achieved through the use of the Binding Markup Extension.
Markup extensions are not specific to Xamarin.Forms; they are a feature of XAML that allows additional functionality to be applied when setting the value of an attribute on an element.
For example, let's take a closer look at the first StyledLabel element in the above example.
<local:StyledLabel Text="I want to be italicized" TextColor="Black" HorizontalOptions="CenterAndExpand" Style="Italic" />
The problem with this markup is that all of the properties (attributes) are being explicitly assigned. This creates a rather inflexible design. So what happens if for some reason during the execution of our application, we want to change the Style attribute to have a value of Bold? Well, in our code-behind file, we would need to watch for an event, catch that event, get a hold of this instance of the StyledLabel element and modify this attribute value. That sounds like a lot of work. Wouldn't it be nice if we could make that process easier? Well, we can.
Binding Markup Extension
The way that you are able to make this design more flexible for modification is through the use of the Binding markup extension. The way you use this extension is by modifying the markup to look like the following:
<local:StyledLabel Text="I want to be italicized" TextColor="Black" HorizontalOptions="CenterAndExpand" Style="{Binding FirstStyle}" />
As you can see, we've changed the value of the Style property to {Binding FirstStyle}. The use of a markup extension is signified by curly braces {}: whatever is contained inside them is interpreted as a markup extension.
In this case, we are using the Binding extension. The second part of this extension is the name of a property that we want to bind to this property (attribute). In this case, we will call it FirstStyle. That doesn't exist yet, but we will take care of that in a moment. First, let's completely update this file to take advantage of data-binding.
<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:local="clr-namespace:Customization;assembly=Customization"
             x:Class="Customization.StyledLabelPage">
    <StackLayout BackgroundColor="White" Spacing="100" Padding="100">
        <local:StyledLabel Text="I want to be italicized" TextColor="Black" HorizontalOptions="CenterAndExpand" Style="{Binding FirstStyle}" />
        <local:StyledLabel Text="I want to be bold" TextColor="Black" HorizontalOptions="CenterAndExpand" Style="{Binding SecondStyle}" />
        <local:StyledLabel Text="I want to be italicized and bold" TextColor="Black" HorizontalOptions="CenterAndExpand" Style="{Binding ThirdStyle}" />
    </StackLayout>
</ContentPage>
BindingContext
Since we are creating a binding, by definition we are trying to link this XAML attribute to something else that will allow these two properties to share their data. To do that, you will first need to create a class that contains properties with the same names that we are using in the XAML example above.
Create a new class within the Customization (Portable) project, name it SampleStyles, and replace its contents with the following:
public class SampleStyles
{
    public StyleType FirstStyle { get; set; }
    public StyleType SecondStyle { get; set; }
    public StyleType ThirdStyle { get; set; }
}
This is a very simple class containing three properties of type StyleType whose names match the ones we used in the bindings. We now have the XAML using the Binding markup extension and a class with matching property names. We just need glue to put them together. That glue is the BindingContext.
To link the properties of these objects together, we need to assign an instance of the SampleStyles class to the BindingContext property of StyledLabelPage. Open the StyledLabelPage.xaml.cs file and modify the constructor to look like the following:
public StyledLabelPage()
{
    InitializeComponent();

    BindingContext = new SampleStyles {
        FirstStyle = StyleType.Italic,
        SecondStyle = StyleType.Bold,
        ThirdStyle = StyleType.BoldItalic
    };
}
In theory, if you were to run your application, the XAML file would get populated with the values from our SampleStyles properties and everything would be rendered on the screen as we saw before. Unfortunately that is not the case. You wind up getting an exception at runtime that looks like this:
If you look at Additional information, you will see the problem is that No Property of name Style found. This is a result of the way we created the StyledLabel class in the beginning. To take advantage of data-binding, your properties need to be backed by a BindableProperty. To do this, we need to make a small modification to our StyledLabel class.
public class StyledLabel : Label {
    public static readonly BindableProperty StyleProperty = BindableProperty.Create<StyledLabel, StyleType>(p => p.Style, StyleType.None);

    public StyleType Style {
        get { return (StyleType)base.GetValue(StyleProperty); }
        set { base.SetValue(StyleProperty, value); }
    }
}
As you can see, we have added a static field named StyleProperty of type BindableProperty, assigning it the result of the Create method. The generic parameters of Create define the owner of the property and its return type: the property is Style, the owner is StyledLabel, and the return type is StyleType. The arguments we supply to the method are an expression identifying the instance property that is being backed and a default value. In our case, we return the value of the Style instance property and the default is None, or no styling.
We then modify the Style property implementation to defer getting and setting to the base class, so that the BindableProperty is updated properly when the value of the Style property changes.
Now, if you were to run your application again, you should see that everything is working as expected.
iOS
Android
Windows Phone
Conclusion
In this tutorial, you learned about a very important concept in the world of Xamarin.Forms, customization. Customization is one of the key features that allows your applications to stand out from the competition.
Knowing how, when, and where to customize is a very important skill to have as a mobile developer. I hope you find these skills useful and are able to put them to good use in your next project.
As you create applications with Xamarin.Forms, you will no doubt like the simplicity of creating user interfaces. Using Xamarin.Forms, you are able to use the same terminology for controls across multiple platforms.
While this concept can be very powerful, as a designer or a developer, it can be somewhat limiting. It may seem like we're forced to use the native user interface controls that come with each of the platforms without the ability to add customization. This is not the case.
In order to get into the process of customizing the user interface for specific platforms, you must first understand the rendering process of Xamarin.Forms.
2. Control Rendering
When it comes to using Xamarin.Forms to create a user interface for your cross platform mobile application, there are two important pieces to the puzzle that you must understand.
Element
The first piece of the puzzle is the element. You can think of an element as the platform agnostic definition of a control within Xamarin.Forms. If you have read through the documentation at all, you will know that these controls are also referred to as View objects. To be even more specific, every element within Xamarin.Forms derives from the View class.
These elements are used to describe a visual element. They provide a platform agnostic definition of how the control should look and behave. An element on its own can't actually create a control that is displayed to the user. It needs some help. This is where the second piece of the rendering process comes in, the renderer.
Renderer
A renderer comes into play when you run your application. The job of the renderer is to take the platform agnostic element and transform it into something visual to present to the user.
For example, if you were using a Label control in your shared project, during the running of your application, the Xamarin.Forms framework would use an instance of the LabelRenderer class to draw the native control. If you are starting to wonder how this happens from a shared code project, that's a very good question. The answer is, it doesn't.
Let's illustrate this with an example. Start by opening either Xamarin Studio or Visual Studio. The process and concepts are the same for both. If you are using Xamarin Studio, there is no support for Windows Phone projects so you will only create three projects in your solution. If you are using Visual Studio, you will create four projects.
In Visual Studio, create a new project, select the Mobile Apps project family on the left, and choose the Blank App (Xamarin.Forms Portable) project template on the right. You can name your project anything you like, but if you wish to follow along with me, use the name Customization, and then click OK.
Now, depending on your IDE, you should see either three or four projects in your solution. If you expand the References folder in your Customization (Portable) project, you will see an assembly reference to Xamarin.Forms.Core. This is where all the different elements are defined for you to use in your shared user interface project. Nothing out of the ordinary there.
If you open each of the platform specific projects and expand their References folders, you'll see that each one contains a platform specific implementation of that Xamarin.Forms library, named Xamarin.Forms.Platform.Android, Xamarin.Forms.Platform.iOS, and Xamarin.Forms.Platform.WP8 respectively.
It's in these assemblies that you'll find the renderers for each of the Xamarin.Forms elements. Now you are beginning to see the layout of the process. The platform agnostic elements, or View objects, are in the shared code project, but all the specific renderers for the elements are in the platform specific projects.
This means that for each of the elements you use, a renderer comes into play in every platform specific project, two in Xamarin Studio and three in Visual Studio. Now that you see how this is structured in Xamarin.Forms, the next logical question usually is, "When should I use customizations?".
3. When to Customize
There are definitely a good number of properties and characteristics that are defined within Xamarin.Forms elements that can be used to customize the final control on each of the platforms. Having said that though, not every customization available in each of the platforms exists in Xamarin.Forms. That being the case, there are two main scenarios when you will want to create customizations.
The first scenario when customizations will be needed is when you want to create a completely custom control. Let's say you wanted to create a calendar control or maybe some sort of graphing control. Unfortunately nothing like that exists today in Xamarin.Forms, which is not to say that it never will.
This is definitely a situation where you will need to start from square one and create everything from scratch. You will need to define the element you are going to use to describe the characteristics of the control in a platform agnostic manner. Then you will also need to create a custom renderer for each of the platforms you wish to support.
Depending on what you are building, this can be a rather extensive project. That being the case, I will save that for another tutorial in and of itself. Instead, in this tutorial, we will be focusing on the second scenario in which you will need some customization.
The second situation in which you'll find yourself needing some customization is when a built-in element doesn't support a specific feature of a platform you wish to support. An example of this is the Label control. In Xamarin.Forms, there is no mechanism, or property, that lets you make a label's text bold or italic on each of the platforms. This may seem like a very simple scenario, but you will find that the basic process of making this change available in the element and having the renderer understand it is the same here as in more complex scenarios.
With the second scenario in mind, you have two options. You can either replace the existing renderer for a specific platform (or for all platforms) and create your own functionality and drawing logic for all the capabilities of the element. Alternatively, you can create your own element that derives from the existing element and associate that new element with a custom renderer. This way, you will retain all the default logic and rendering capabilities of the base element and customize it as you wish. This will be the route we take for this example. Now, let's see how to add this functionality to our own project.
4. Adding Customization
Let's start this process by setting up the basic structure of our application so we can see our baseline and then make changes. Start by opening your App.cs file in the Customization (Portable) project in the Solution Explorer. Modify the GetMainPage method to look like this:
public static Page GetMainPage() {
    var iLabel = new Label {
        TextColor = Color.Black,
        Text = "I want to be italicized!",
        HorizontalOptions = LayoutOptions.CenterAndExpand
    };

    var bLabel = new Label {
        Text = "I want to be bold!",
        TextColor = Color.Black,
        HorizontalOptions = LayoutOptions.CenterAndExpand
    };

    var bothLabel = new Label {
        Text = "I want to be italicized and bold!",
        TextColor = Color.Black,
        HorizontalOptions = LayoutOptions.CenterAndExpand
    };

    return new ContentPage {
        BackgroundColor = Color.White,
        Content = new StackLayout {
            Padding = 100,
            Spacing = 100,
            Children = { iLabel, bLabel, bothLabel }
        }
    };
}
As you can see here, we have created three simple Label controls. One wants to be italicized, one wants to be bold, and the third is greedy and wants to be both. If you were to run this application on iOS, Android, and Windows Phone, they would look something like this:
iOS
Android
Windows Phone
As you can see, they don't want to be this boring. Well, don't just sit there, help them out.
Step 1: Creating a New Element
The first thing we need to do is create a new element that we can use to provide additional customizations to the existing Label control. Start by adding a new class to your Customization (Portable) project and name it StyledLabel. Replace its contents with the following:
public enum StyleType {
    None,
    Italic,
    Bold,
    BoldItalic
}

public class StyledLabel : Label
{
    public StyleType Style { get; set; }
}
We define a very simple enumeration and class. The enumeration allows for italic, bold, and bold plus italic values. We then create a class, StyledLabel, that derives from the Label base class and adds a new property, Style, to hold the appropriate style we want to apply to the control.
To make sure everything still works, and it should, let's modify the App.cs file once more and replace the Label elements in our first example with our new StyledLabel elements. Because the StyledLabel class inherits from the Label class, everything should still work.
public static Page GetMainPage() {
    var iLabel = new StyledLabel {
        TextColor = Color.Black,
        Text = "I want to be italicized!",
        HorizontalOptions = LayoutOptions.CenterAndExpand,
        Style = StyleType.Italic
    };

    var bLabel = new StyledLabel {
        Text = "I want to be bold!",
        TextColor = Color.Black,
        HorizontalOptions = LayoutOptions.CenterAndExpand,
        Style = StyleType.Bold
    };

    var bothLabel = new StyledLabel {
        Text = "I want to be italicized and bold!",
        TextColor = Color.Black,
        HorizontalOptions = LayoutOptions.CenterAndExpand,
        Style = StyleType.BoldItalic
    };

    return new ContentPage {
        BackgroundColor = Color.White,
        Content = new StackLayout {
            Padding = 100,
            Spacing = 100,
            Children = { iLabel, bLabel, bothLabel }
        }
    };
}
Once again, here are the results of this change.
iOS
Android
Windows Phone
As you can see, nothing has changed. Now that we have a new custom element, it is time to create the custom renderers to take care of the native controls.
Step 2: Android Renderer
The first step in creating a renderer is to add a new class to the platform you are targeting. We will be starting with the Xamarin.Android project. Within this project, create a new class file, name it StyledLabelRenderer, and replace its contents with the following:
using Android.Graphics;
using Customization;
using Customization.Droid;
using Xamarin.Forms;
using Xamarin.Forms.Platform.Android;

[assembly: ExportRenderer(typeof(StyledLabel), typeof(StyledLabelRenderer))]
namespace Customization.Droid
{
    public class StyledLabelRenderer : LabelRenderer
    {
        protected override void OnElementChanged(ElementChangedEventArgs<Label> e)
        {
            base.OnElementChanged(e);

            var styledLabel = (StyledLabel)Element;

            switch (styledLabel.Style)
            {
                case StyleType.Bold:
                    Control.SetTypeface(null, TypefaceStyle.Bold);
                    break;
                case StyleType.Italic:
                    Control.SetTypeface(null, TypefaceStyle.Italic);
                    break;
                case StyleType.BoldItalic:
                    Control.SetTypeface(null, TypefaceStyle.BoldItalic);
                    break;
            }
        }
    }
}
We start with a special assembly attribute that tells Xamarin.Forms to use this StyledLabelRenderer class as the renderer every time it tries to render StyledLabel objects. This is required for your customizations to work properly.
Just as our StyledLabel element inherits from the Label class, our new StyledLabelRenderer class inherits from the LabelRenderer class. This allows us to keep the existing functionality so we only have to override what we want to change or customize.
In order to apply our new formatting, we are going to need to jump into the rendering process and we do that via the OnElementChanged method. In this method, we can do all of our customizations.
When doing your customizations, there are two very important properties you will be using. First, you need a reference to the original element that you created and that is being rendered. You get it through the Element property. This property is typed as the base element, so you have to cast it to whatever type you are rendering. In this case, it is a StyledLabel.
var styledLabel = (StyledLabel)Element;
The second important property you need is the Control property. This property contains a typed reference to the native control on the platform. In this case, since you have inherited from the LabelRenderer class, the code already knows that the Control in this case is a TextView.
From this point, you will use some simple logic to determine which customization to perform and apply the appropriate native customizations. In this case, you will use the Android mechanism for modifying the typeface of a TextView by using the SetTypeface method.
switch (styledLabel.Style)
{
    case StyleType.Bold:
        Control.SetTypeface(null, TypefaceStyle.Bold);
        break;
    case StyleType.Italic:
        Control.SetTypeface(null, TypefaceStyle.Italic);
        break;
    case StyleType.BoldItalic:
        Control.SetTypeface(null, TypefaceStyle.BoldItalic);
        break;
}
If you were to run this application now, you should see something like the following in the Android Emulator, which is exactly what we aimed for.
Step 3: iOS Renderer
The process of creating the iOS renderer is exactly the same up until the point of overriding the OnElementChanged method. Begin by creating a new class in your Customization.iOS project. Name it StyledLabelRenderer and replace the contents with the following:
using Customization;
using Customization.iOS;
using MonoTouch.UIKit;
using Xamarin.Forms;
using Xamarin.Forms.Platform.iOS;

[assembly: ExportRenderer(typeof(StyledLabel), typeof(StyledLabelRenderer))]
namespace Customization.iOS
{
    public class StyledLabelRenderer : LabelRenderer
    {
        protected override void OnElementChanged(ElementChangedEventArgs<Label> e)
        {
            base.OnElementChanged(e);

            var styledLabel = (StyledLabel)Element;

            switch (styledLabel.Style)
            {
                case StyleType.Bold:
                    Control.Font = UIFont.BoldSystemFontOfSize(16.0f);
                    break;
                case StyleType.Italic:
                    Control.Font = UIFont.ItalicSystemFontOfSize(16.0f);
                    break;
                case StyleType.BoldItalic:
                    Control.Font = UIFont.FromName("Helvetica-BoldOblique", 16.0f);
                    break;
            }
        }
    }
}
As you can see, everything is exactly the same. You have the same assembly attribute, you are overriding the same OnElementChanged method, you are casting the Element property to a StyledLabel, and you have the same shell of a switch statement to work through the Style property.
The only difference comes in where you are applying the styling to the native UILabel control.
switch (styledLabel.Style)
{
    case StyleType.Bold:
        Control.Font = UIFont.BoldSystemFontOfSize(16.0f);
        break;
    case StyleType.Italic:
        Control.Font = UIFont.ItalicSystemFontOfSize(16.0f);
        break;
    case StyleType.BoldItalic:
        Control.Font = UIFont.FromName("Helvetica-BoldOblique", 16.0f);
        break;
}
The way you make a UILabel's Font property either bold or italic in iOS is through a static helper method on the UIFont class, BoldSystemFontOfSize or ItalicSystemFontOfSize. That works for either a bold font or an italic font, but not both. If you try to apply both of these to a UILabel, only the last one will render.
To get both styles, we will cheat a little and use a built-in font in iOS named Helvetica-BoldOblique. This font has both bold and italic built-in so we don't have to do them individually.
Running this in the iOS Simulator will give you the following result:
Step 4: Windows Phone Renderer
Finally, we come to Windows Phone. As you may have already guessed, the process is exactly the same. Create a new class in the Customization.WinPhone project, name it StyledLabelRenderer and replace the contents with the following:
using System.Windows;
using Customization;
using Customization.WinPhone;
using Xamarin.Forms;
using Xamarin.Forms.Platform.WinPhone;

[assembly: ExportRenderer(typeof(StyledLabel), typeof(StyledLabelRenderer))]
namespace Customization.WinPhone
{
    public class StyledLabelRenderer : LabelRenderer
    {
        protected override void OnElementChanged(ElementChangedEventArgs<Label> e)
        {
            base.OnElementChanged(e);

            var styledLabel = (StyledLabel)Element;

            switch (styledLabel.Style)
            {
                case StyleType.Bold:
                    Control.FontWeight = FontWeights.Bold;
                    break;
                case StyleType.Italic:
                    Control.FontStyle = FontStyles.Italic;
                    break;
                case StyleType.BoldItalic:
                    Control.FontStyle = FontStyles.Italic;
                    Control.FontWeight = FontWeights.Bold;
                    break;
            }
        }
    }
}
Once again, everything is the same except for the logic. In this case, to make the text italic, you set the TextBlock's FontStyle property to Italic. Then to make the text bold, you set the FontWeight property to Bold. If you want to apply both, you simply set both.
Running this application in the Windows Phone emulator will give you the following result:
You have now successfully created a fully functional, customized, cross platform element that renders itself perfectly on all three platforms. You should now feel ready to take on the world. Well, almost.
The process that we have followed throughout this tutorial is completely valid and in most cases is going to work perfectly. There is a very specific case, though, in which we will be missing out on some functionality if we use that approach. That case is data-binding in XAML.
5. XAML and Data-Binding
One of the very cool features of Xamarin.Forms is the fact that you get to use XAML and data-binding just as you would if you were creating a Windows Phone, WPF, or Silverlight application. Unfortunately data-binding and XAML are beyond the scope of this tutorial, but I encourage you to read more about this topic on the XAML for Xamarin.Forms page.
Step 1: Building the XAML Page
Let's start by building a simple XAML page that duplicates the user interface we've previously created in code. Start by adding a new file to your Customization (Portable) project, selecting the Forms XAML Page file type, and giving it the name StyledLabelPage.
Once the file is created, replace the contents with the following:
<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:local="clr-namespace:Customization;assembly=Customization"
             x:Class="Customization.StyledLabelPage">
    <StackLayout BackgroundColor="White" Spacing="100" Padding="100">
        <local:StyledLabel Text="I want to be italicized" TextColor="Black" HorizontalOptions="CenterAndExpand" Style="Italic" />
        <local:StyledLabel Text="I want to be bold" TextColor="Black" HorizontalOptions="CenterAndExpand" Style="Bold" />
        <local:StyledLabel Text="I want to be italicized and bold" TextColor="Black" HorizontalOptions="CenterAndExpand" Style="BoldItalic" />
    </StackLayout>
</ContentPage>
This XAML will create the exact same page that we have been working with before. Note the addition of the xmlns:local namespace declaration at the top of the file as well as the local: prefix before each reference to the StyledLabel objects. Without these, the XAML parser won't know what a StyledLabel is and ultimately won't be able to run.
In order to run this, you will need to make two small modifications. First, open the App.cs file and modify the GetMainPage method to look like this:
public static Page GetMainPage() {
    return new StyledLabelPage();
}
Second, open the StyledLabelPage.xaml.cs file and change it to look like this:
public partial class StyledLabelPage : ContentPage
{
    public StyledLabelPage()
    {
        InitializeComponent();
    }
}
Now, when you run your applications, you should get the same results on all three platforms. Pretty neat, huh?
iOS
Android
Windows Phone
Step 2: Adding Data-Binding
If you are familiar with the Model-View-ViewModel (MVVM) pattern, you will know that one of its primary characteristics is data-binding. In fact, this pattern was designed around the use of XAML.
Data-binding is the process of allowing the properties of two objects to be linked together so that a change in one will create a change in the other. The process of data-binding within XAML is achieved through the use of the Binding Markup Extension.
Markup extensions are not specific to Xamarin.Forms. They are a feature of XAML that allows additional functionality to be applied in the process of setting the value of an attribute on an element.
For example, let's take a closer look at the first StyledLabel element in the above example.
<local:StyledLabel Text="I want to be italicized" TextColor="Black" HorizontalOptions="CenterAndExpand" Style="Italic" />
The problem with this markup is that all of the properties (attributes) are being explicitly assigned. This creates a rather inflexible design. So what happens if for some reason during the execution of our application, we want to change the Style attribute to have a value of Bold? Well, in our code-behind file, we would need to watch for an event, catch that event, get a hold of this instance of the StyledLabel element and modify this attribute value. That sounds like a lot of work. Wouldn't it be nice if we could make that process easier? Well, we can.
Binding Markup Extension
The way that you are able to make this design more flexible for modification is through the use of the Binding markup extension. The way you use this extension is by modifying the markup to look like the following:
<local:StyledLabel Text="I want to be italicized" TextColor="Black" HorizontalOptions="CenterAndExpand" Style="{Binding FirstStyle}" />
As you can see, we've changed the value of the Style property to {Binding FirstStyle}. The use of a markup extension is typically signified by the use of curly braces {}. This means that whatever is contained inside the curly braces is going to be a markup extension.
In this case, we are using the Binding extension. The second part of this extension is the name of a property that we want to bind to this property (attribute). In this case, we will call it FirstStyle. That doesn't exist yet, but we will take care of that in a moment. First, let's completely update this file to take advantage of data-binding.
<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:local="clr-namespace:Customization;assembly=Customization"
             x:Class="Customization.StyledLabelPage">
    <StackLayout BackgroundColor="White" Spacing="100" Padding="100">
        <local:StyledLabel Text="I want to be italicized" TextColor="Black" HorizontalOptions="CenterAndExpand" Style="{Binding FirstStyle}" />
        <local:StyledLabel Text="I want to be bold" TextColor="Black" HorizontalOptions="CenterAndExpand" Style="{Binding SecondStyle}" />
        <local:StyledLabel Text="I want to be italicized and bold" TextColor="Black" HorizontalOptions="CenterAndExpand" Style="{Binding ThirdStyle}" />
    </StackLayout>
</ContentPage>
BindingContext
Since we are creating a binding, by definition we are trying to link this XAML attribute to something else that will allow these two properties to share their data. To do that, you will first need to create a class that contains properties with the same names that we are using in the XAML example above.
Create a new class within the Customization (Portable) project, name it SampleStyles, and replace its contents with the following:
public class SampleStyles
{
    public StyleType FirstStyle { get; set; }
    public StyleType SecondStyle { get; set; }
    public StyleType ThirdStyle { get; set; }
}
This is a very simple class that contains three properties of type StyleType with the same names that we used in the bindings in the XAML. We now have the XAML using the Binding markup extension and a class that contains matching properties. We just need glue to put them together. That glue is the BindingContext.
To link the properties of these objects together, we need to assign an instance of the SampleStyles class to the BindingContext property of StyledLabelPage. Open the StyledLabelPage.xaml.cs file and modify the constructor to look like the following:
public StyledLabelPage()
{
    InitializeComponent();

    BindingContext = new SampleStyles {
        FirstStyle = StyleType.Italic,
        SecondStyle = StyleType.Bold,
        ThirdStyle = StyleType.BoldItalic
    };
}
In theory, if you were to run your application, the XAML file would get populated with the values from our SampleStyles properties and everything would be rendered on the screen as we saw before. Unfortunately that is not the case. You wind up getting an exception at runtime that looks like this:
If you look at the exception's additional information, you will see the problem: "No Property of name Style found". This is a result of the way we created the StyledLabel in the beginning. To take advantage of data-binding, your properties need to be backed by a BindableProperty. To do this, we need to make a small modification to our StyledLabel class.
public class StyledLabel : Label
{
    public static readonly BindableProperty StyleProperty =
        BindableProperty.Create<StyledLabel, StyleType>(p => p.Style, StyleType.None);

    public StyleType Style {
        get { return (StyleType)base.GetValue(StyleProperty); }
        set { base.SetValue(StyleProperty, value); }
    }
}
As you can see, we have added a static StyleProperty member of type BindableProperty. We then assigned to it the result of the BindableProperty.Create method. The first generic parameter defines the owner of the property, StyledLabel, and the second is the type of the property, StyleType. We supply the method two arguments, an expression that returns the value of the Style instance property and a default value, StyleType.None, which means no styling.
We then need to modify the Style property implementation to defer the getting and setting functionality to the base class so that the BindableProperty is updated properly when the value of the Style property changes.
Now, if you were to run your application again, you should see that everything is working as expected.
iOS
Android
Windows Phone
Conclusion
In this tutorial, you learned about a very important concept in the world of Xamarin.Forms, customization. Customization is one of the key features that allows your applications to stand out from the competition.
Knowing how, when, and where to customize is a very important skill to have as a mobile developer. I hope you find these skills useful and are able to put them to good use in your next project.
One of the most popular new features introduced in iOS 8 is the ability to create several types of extensions. In this tutorial, I will guide you through the process of creating a custom widget for the Today section of the notification center. But first, let's briefly review some topics about extensions and understand the important concepts that underly widgets.
1. What Is an Extension?
An extension is a special purpose binary. It's not a complete app; it needs a containing app to be distributed. This could be your existing app, which can include one or more extensions, or a newly created one. Although the extension is not distributed separately, it does have its own container.
An extension is launched and controlled via its host app. It could be Safari, for example, if you're creating a share extension, or the Today system app that takes care of the notification center and other widgets. Each system area that supports being extended is called an extension point.
To create an extension, you add a new target to the project of the containing app. The templates provided by Xcode already include the appropriate framework for each extension point, allowing the extension to interact with the host app and follow its policies.
2. Today Extension Point
Extensions created for the today extension point, the so-called widgets, are meant to provide simple and quick access to information. Widgets link to the Notification Center framework. It's important that you design your widget with a simple and focused user interface, because too much interaction can be a problem. Note also that you don't have access to a keyboard.
Widgets are expected to perform well and keep their content updated. Performance is a big point to consider. Your widget needs to be ready quickly and use resources wisely. This will avoid slowing the whole experience down. The system terminates widgets that use too much memory, for example. Widgets need to be simple and focused on the content they are displaying.
That's enough theory for now. Let's start creating a custom today widget. The widget we're about to create will show information about disk usage, including a progress bar to provide a quick visual reference for the user. Along the way, we'll also cover other important concepts of iOS 8 extensions.
3. Target Setup
Step 1: Project Setup
If you want to build this widget as an extension to an existing app, go ahead and open your Xcode project, and jump to the second step. If you're starting from scratch just like me, then you first need to create a containing app.
Open Xcode and select New > Project... from the File menu. We will be using Objective-C as the programming language and the Single View Application template to start with.
Step 2: Add New Target
Open the File menu and choose New > Target.... In the Application Extension category, select the Today Extension template.
You'll notice that the Project to which the target will be added is the project we're currently working with and that the extension will be embedded in the containing application. Also note that the extension has a distinct bundle identifier based on that of the containing application, com.tutsplus.Today.Used-Space.
Click Next, give your widget a name, for example, Used Space, and click Finish to create the new target. Xcode creates a new scheme for the widget and asks whether to activate it. Click Activate to continue.
Xcode has created a new group for the widget named Used Space and added a number of files to it, a UIViewController subclass and a storyboard. That's right, a widget is nothing more than a view controller and a storyboard. If you open the view controller's header in the code editor, you'll notice that it is indeed a subclass of UIViewController.
If you select the extension target from the list of targets, open the Build Phases tab, and expand the Link Binary With Libraries section, you'll see that the new target is linked to the Notification Center framework.
4. User Interface
We'll now build a basic user interface for our widget. Determining the widget size is important and there are two ways of telling the system the amount of space we need. One is using Auto Layout and the other is using the preferredContentSize property of the view controller.
The concept of adaptive layouts is also applicable to widgets. Not only do we now have iPhones with various widths (and iPads and future devices), but also remember that the widget might need to show its content in landscape orientation. If the user interface can be described with Auto Layout constraints, then that is a clear advantage for the developer. The height can be adjusted later with setPreferredContentSize: if needed.
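If Auto Layout alone can't express the height you need, you can ask the system for a specific size from the view controller. As a small illustration (the 200-point height here is purely an example value, not something prescribed by the system):

```objc
// Width is ignored for Today widgets; the system always uses the full width.
// Only the height of the preferred content size is taken into account.
[self setPreferredContentSize:CGSizeMake(0.0, 200.0)];
```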
Step 1: Adding Elements
Open MainInterface.storyboard in the Xcode editor. You'll notice that a label displaying "Hello World" is already present in the view controller's view. Select it and delete it from the view as we won't be using it. Add a new label to the view and align it to the right margin as shown below.
In the Attributes Inspector, set text color to white, text alignment to right, and the label's text to 50.0%.
Select Size to Fit Content from Xcode's Editor menu to resize the label properly if it's too small to fit its contents.
Next, add a UIProgressView instance to the left of the label and position it as shown below.
With the progress view selected, change the Progress Tint attribute in the Attributes Inspector to white and the Track Tint color to dark grey. This will make it more visible. This is looking good so far. It's time to apply some constraints.
Step 2: Adding Constraints
Select the percentage label and add a top, bottom, and trailing constraint as shown below. Be sure to uncheck the Constrain to margins checkbox.
Select the progress view and add a top, leading, and trailing constraint. Use this opportunity to change the leading space to 3 and don't forget to uncheck Constrain to margins.
Because we changed the value of the leading constraint of the progress view, we have a small problem to fix: the frame of the progress view no longer reflects its constraints. With the progress view selected, click the Resolve Auto Layout Issues button at the bottom and choose Update Frames from the Selected Views section. This updates the frame of the progress view based on the constraints we set earlier.
Step 3: Build and Run
It's time to see the widget in action. With the Used Space scheme selected, select Run from the Product menu or hit Command-R. Reveal the notification center by swiping from the top of the screen to the bottom and tap the Edit button at the bottom of the notification center. Your widget should be available to add to the Today section. Add it to the Today section by tapping the add button on its left.
This is what our extension should look like.
That looks good, but why is there so much space below the progress view and label? Also, why didn't the operating system respect the leading constraint of the progress view?
Both issues are caused by standard margins set by the operating system. We will change this in the next step. Note, however, that the left margin is desirable, since it aligns the progress view with the widget's name.
If you rotate your device or run the application on a different device, you'll notice that the widget adjusts its size properly. That's thanks to Auto Layout.
Step 4: Fixing the Bottom Margin
Open TodayViewController.m in Xcode's editor. You'll notice that the view controller conforms to the NCWidgetProviding protocol. This means we can implement the widgetMarginInsetsForProposedMarginInsets: method and return a custom UIEdgeInsets structure to override the default margins. Update the method's implementation as shown below.
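A minimal implementation might look like this. The exact inset values are a suggestion; keeping the left inset untouched preserves the alignment with the widget's title.

```objc
- (UIEdgeInsets)widgetMarginInsetsForProposedMarginInsets:(UIEdgeInsets)defaultMarginInsets
{
    // Keep the system's left margin so the content stays aligned with
    // the widget's title, but shrink the bottom margin.
    defaultMarginInsets.bottom = 10.0;
    return defaultMarginInsets;
}
```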
Run the application again to see the result. The widget should be smaller with less margin at the bottom. You can customize these margins to get the result you're after.
Step 5: Connecting Outlets
Before moving on, let's finish the user interface by adding two outlets. With the storyboard file opened, switch to the assistant editor and make sure that it displays TodayViewController.m.
Hold Control and drag from the label to the view controller's interface to create an outlet for the label. Name the outlet percentLabel. Repeat this step and create an outlet named barView for the UIProgressView instance.
5. Displaying Real Data
We will use the NSFileManager class to calculate the device's available space. But how do we update the widget with that data?
This is where another method from the NCWidgetProviding protocol comes into play. The operating system invokes the widgetPerformUpdateWithCompletionHandler: method when the widget is loaded, and it can also be called in the background. In the latter case, even if the widget is not visible, the system may launch it and ask for updates so it can save a snapshot. That snapshot is displayed the next time the widget appears, usually for a short time, until the widget has refreshed its contents.
The argument passed to this method is a completion handler that needs to be called when the content or data is updated. The block takes a parameter of type NCUpdateResult that describes whether we have new content to show. If not, the operating system knows that there is no need to save a new snapshot.
Step 1: Properties
We first need to create some properties to hold the free, used, and total sizes. We also add a property for the used space as a fraction of the total, which gives us greater flexibility later. Add these properties to the class extension in TodayViewController.m.
@property (nonatomic, assign) unsigned long long fileSystemSize;
@property (nonatomic, assign) unsigned long long freeSize;
@property (nonatomic, assign) unsigned long long usedSize;
@property (nonatomic, assign) double usedRate;
Step 2: Implementing updateSizes
Next, create and implement a helper method, updateSizes, to fetch the necessary data and calculate the device's used space.
- (void)updateSizes
{
    // Retrieve the file system attributes from NSFileManager
    NSDictionary *dict = [[NSFileManager defaultManager] attributesOfFileSystemForPath:NSHomeDirectory() error:nil];

    // Extract the total and free sizes, and derive the used size
    self.fileSystemSize = [[dict valueForKey:NSFileSystemSize] unsignedLongLongValue];
    self.freeSize = [[dict valueForKey:NSFileSystemFreeSize] unsignedLongLongValue];
    self.usedSize = self.fileSystemSize - self.freeSize;
}
Step 3: Caching
We can take advantage of NSUserDefaults to save the calculated used space between launches. The lifecycle of a widget is short, so if we cache this value, we can set up the user interface with an initial value and then calculate the actual one.
This is also helpful to determine if we need to update the widget snapshot or not. Let's create two convenience methods to access the user defaults database.
Note that we use a RATE_KEY macro, so don't forget to add it at the top of TodayViewController.m.
// Macro for NSUserDefaults key
#define RATE_KEY @"kUDRateUsed"
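The two convenience methods could be implemented as follows. The method names are our own choice; what matters is that they read and write the RATE_KEY entry in the user defaults database.

```objc
// Read the cached used-space fraction from the user defaults database.
- (double)loadUsedRate
{
    return [[NSUserDefaults standardUserDefaults] doubleForKey:RATE_KEY];
}

// Cache the latest used-space fraction.
- (void)saveUsedRate:(double)rate
{
    [[NSUserDefaults standardUserDefaults] setDouble:rate forKey:RATE_KEY];
}
```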
Step 4: Updating the User Interface
Because our widget is a view controller, the viewDidLoad method is a good place to update the user interface. We make use of a helper method, updateInterface, to do so.
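A sketch of both methods, assuming usedRate stores a fraction between 0 and 1:

```objc
- (void)viewDidLoad
{
    [super viewDidLoad];

    // Start from the cached value so the widget shows something
    // meaningful immediately, before the actual calculation finishes.
    self.usedRate = [[NSUserDefaults standardUserDefaults] doubleForKey:RATE_KEY];
    [self updateInterface];
}

// Push the current usedRate value to the label and the progress view.
- (void)updateInterface
{
    self.percentLabel.text = [NSString stringWithFormat:@"%.1f%%", self.usedRate * 100.0];
    self.barView.progress = self.usedRate;
}
```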
The number of free bytes tends to change quite frequently. To check whether we really need to update the widget, we compare the calculated used space against a threshold of 0.01% instead of the exact number of free bytes. Change the implementation of widgetPerformUpdateWithCompletionHandler: as shown below.
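An implementation along these lines fits that description; the 0.0001 threshold corresponds to 0.01% when usedRate is stored as a fraction.

```objc
- (void)widgetPerformUpdateWithCompletionHandler:(void (^)(NCUpdateResult))completionHandler
{
    [self updateSizes];
    double newRate = (double)self.usedSize / (double)self.fileSystemSize;

    // Only treat the change as new content if it exceeds the threshold.
    if (fabs(newRate - self.usedRate) > 0.0001) {
        self.usedRate = newRate;
        [[NSUserDefaults standardUserDefaults] setDouble:newRate forKey:RATE_KEY];
        [self updateInterface];
        completionHandler(NCUpdateResultNewData);
    } else {
        completionHandler(NCUpdateResultNoData);
    }
}
```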
We recalculate the used space and, if it's significantly different from the previous value, save the value and update the interface. We then tell the operating system that something changed. If not, there's no need for a new snapshot. While we don't use it in this example, there is also an NCUpdateResultFailed value to indicate that an error occurred.
Step 5: Build and Run
Run your application once more. It should now display the correct value of how much space is used by your device.
6. Recap
Let's review the lifecycle of your new widget. When you open the Today panel, the system may display a previous snapshot until it is ready. The view is loaded and your widget will retrieve a value cached in NSUserDefaults and use it to update the user interface.
Next, widgetPerformUpdateWithCompletionHandler: is called and it will recalculate the actual value. If the cached and new value are not significantly different, then we don't do anything. If the new value is substantially different, we cache it and update the user interface accordingly.
While in the background, the widget may be launched by the operating system and the same process is repeated. If NCUpdateResultNewData is returned, a new snapshot is created to display for the next appearance.
7. Adding More Information and Animation
Although we are already showing the used space, it would be interesting to have a precise number. To avoid cluttering the user interface, we will make our widget more interactive. If the user taps the percentage label, the widget expands, showing a new label with absolute numbers. This is also a great opportunity to learn how to use animation in widgets.
Step 1: Changing the User Interface
Open MainInterface.storyboard and select the percent label. In the Attributes Inspector, under the View section, find the User Interaction Enabled option and enable it.
Next, we need to remove the bottom constraint of the label. The distance of the label to the bottom of the view will change programmatically, which means the constraint would become invalid.
Select the label, open the Size area in the Size Inspector, select the bottom space constraint, and hit delete. You can also manually select the constraint guide in the view and delete it. The label now only has a top and trailing space constraint as shown below.
Select the view controller by clicking the first of the three icons at the top of the scene. In the Size area of the Size Inspector, set the height to 106.
Add a new label to the view and, as we did before, set its text color to white in the Attributes Inspector. In addition, set the number of lines to 3, the height to 61, and the width to 200. This should be enough to accommodate three lines of information. You also want it aligned to the bottom and left margins.
The last step is to open the assistant editor and create an outlet for the label named detailsLabel.
Step 2: Setup
The widget will only be expanded for a brief moment. We could save a boolean in NSUserDefaults to remember the previous state, but, to keep things simple, the widget starts collapsed every time it's loaded. When the user taps the percentage label, the extra information appears.
Let's first define two macros at the top of TodayViewController.m to help with the sizes.
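For example, the macros could look like this. The names are our own, the collapsed height is an assumption, and 106.0 matches the expanded height we set in the storyboard.

```objc
// Heights for the collapsed and expanded states of the widget.
// WIDGET_CLOSED_HEIGHT is an assumed value; WIDGET_EXPANDED_HEIGHT
// matches the height set on the view controller in the storyboard.
#define WIDGET_CLOSED_HEIGHT 37.0
#define WIDGET_EXPANDED_HEIGHT 106.0
```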
In viewDidLoad, add two lines of code to set the initial height of the widget and to make the details label transparent. We will fade in the details label when the percentage label is tapped.
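Assuming the collapsed-height macro from the previous step is named WIDGET_CLOSED_HEIGHT, the two lines could look like this:

```objc
// In viewDidLoad: collapse the widget and hide the details label.
[self setPreferredContentSize:CGSizeMake(0.0, WIDGET_CLOSED_HEIGHT)];
self.detailsLabel.alpha = 0.0;
```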
Note that we set the width of the widget to 0.0, because the width will be set by the operating system.
Step 3: Updating the Details Label
In the details label, we show the free, used, and total space with the help of NSByteCountFormatter. Add the following implementation to the view controller.
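One possible implementation, using a helper method whose name, updateDetailsLabel, is our own choice:

```objc
// Format the raw byte counts into a readable three-line summary.
- (void)updateDetailsLabel
{
    NSByteCountFormatter *formatter = [[NSByteCountFormatter alloc] init];
    formatter.countStyle = NSByteCountFormatterCountStyleFile;
    self.detailsLabel.text = [NSString stringWithFormat:@"Free: %@\nUsed: %@\nTotal: %@",
                              [formatter stringFromByteCount:self.freeSize],
                              [formatter stringFromByteCount:self.usedSize],
                              [formatter stringFromByteCount:self.fileSystemSize]];
}
```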
To detect touches, we override the touchesBegan:withEvent: method. The idea is simple: whenever a touch is detected, the widget is expanded and the details label is updated. Note that the size of the widget is updated by calling setPreferredContentSize: on the view controller.
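Assuming the macro and helper names used earlier, the override could look like this:

```objc
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Expand the widget and fill in the details label on any touch.
    [self setPreferredContentSize:CGSizeMake(0.0, WIDGET_EXPANDED_HEIGHT)];
    [self updateDetailsLabel];
}
```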
Even though the widget works fine, we can improve the user experience by fading the details label in while the widget expands. This is possible if we implement viewWillTransitionToSize:withTransitionCoordinator:. This method is called when the widget's height changes. Because a transition coordinator object is passed in, we can include additional animations.
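A sketch of that method, again assuming the WIDGET_CLOSED_HEIGHT macro defined earlier:

```objc
- (void)viewWillTransitionToSize:(CGSize)size
       withTransitionCoordinator:(id<UIViewControllerTransitionCoordinator>)coordinator
{
    [super viewWillTransitionToSize:size withTransitionCoordinator:coordinator];

    // Fade the details label in or out alongside the height change.
    [coordinator animateAlongsideTransition:^(id<UIViewControllerTransitionCoordinatorContext> context) {
        self.detailsLabel.alpha = (size.height > WIDGET_CLOSED_HEIGHT) ? 1.0 : 0.0;
    } completion:nil];
}
```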
As you can see, we change the alpha value of the details label, but you can add any type of animation that you feel enhances the user experience.
We are ready to run the application one more time. Give it a try and tap the percentage label to reveal the new details.
Conclusion
While all this logic might seem overly complex for such a simple task, you will now be familiar with the complete process to create a today extension. Keep these principles in mind when designing and building your widget. Remember to keep it simple and direct, and don't forget performance.
Caching isn't strictly necessary for fast operations like these, but it becomes especially important if your widget does expensive processing. Use your knowledge of view controllers and verify that your widget works across various screen sizes. It's also recommended to avoid scroll views and complex touch recognition.
Although the extension will have a separate container, as we saw earlier, it is possible to enable data sharing between the extension and the containing app. You can also use NSExtensionContext's openURL:completionHandler: with a custom URL scheme to launch your app from the widget. And if code is what you need to share with your extension, go ahead and create a framework to use in your app and extension.
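For example, launching the containing app from the widget could look like this; myapp:// is a placeholder scheme that you'd register in the containing app's Info.plist.

```objc
// Open the containing app via a custom URL scheme.
NSURL *url = [NSURL URLWithString:@"myapp://storage-details"];
[self.extensionContext openURL:url completionHandler:nil];
```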
I hope the knowledge presented here comes in useful when building your next great today widget.