In today’s hyperconnected world, people want results fast. As mobile developers, we’re more aware of this than most. Our users don’t sit down in front of a desk. They’re on the go, running our apps while trying to walk, talk, and drive, so they expect snappy experiences. Get in, get out.
Multiple studies from Akamai, Google, and others have correlated website speed with user retention. And so far, evidence suggests people are at least as demanding when using native apps. In a survey of mobile users done by Apigee, the top complaint about mobile applications was freezes, and over 44% of the surveyed users said they would delete a slow-performing app immediately.
Ask Facebook about the importance of fast mobile apps. When their stock bottomed out in the high teens, Mark Zuckerberg said that basing their app on HTML5 was the biggest mistake they made as a company. Why? Because it was slow. Within three weeks after the release of Facebook's new, faster native app, the application's rating had climbed from 1.5 stars to 4. Slow applications cause significant business pain. Lost users. Lost dollars.
What Could Possibly Go Wrong?
When we first talked to developers about monitoring their apps' performance in production, the most common response was "My app is already fast."
The trouble is, as the world of mobile fragments, it's hard to deliver a consistently fast experience. How does your app perform in China on an old phone and slow network? I’m willing to bet you have no idea. It’s certainly not the same as it performs on your brand new iPhone connected to your office Wi-Fi.
Performance is completely dependent on the context in which your application runs. Here’s a quick—but certainly not complete—list of performance gotchas:
Slow Networks
We're used to thinking of internet issues in terms of bandwidth limitations, but in cellular networks, latency is often the dominant factor. On a 3G network, it can take around 2.5 seconds to go from idle to connected before a single byte is transmitted. And Sprint says the average latency on their 3G network is 400ms. It doesn’t matter how fast your server processes a request if the response is slow getting to the phone.
Limited CPU
As geeks, we often develop using the latest and greatest hardware, but most of the world, including massive markets you would like to penetrate, gives up speed in order to achieve affordability. Our tests show that CPU-bound code on a fourth-generation iPod touch takes roughly four times longer than on an iPhone 5S. On Android, the disparity is even more significant.
Limited RAM
If your app uses too much memory, it’s killed by the operating system. To the user, this looks the same as a null pointer exception. Even if your code is squeaky clean without a single memory leak, your memory high water mark may lead to crashes on less powerful, but popular, phones in important markets.
Small Batteries
Batteries are one of the first things to get downsized when manufacturers are trying to save space and money. But that won't make users more understanding when your app drains all their power.
Built for Mobile
Let's say for a moment you're convinced that you need a fast application, and it should be fast everywhere, not just for you when you’re running your app through Apple’s Instruments CPU profiler. What is a developer to do? Right now, you have two basic options:
Option 1: Monitor Your Servers
A fast API means a fast app. Right? This is a web developer's mentality, and if you’re a mobile developer, it’s wrong.
The web is a thin client architecture. Setting aside JavaScript-heavy web apps, most of the interesting code behind websites runs on the server. The client—the browser—is effectively just a stateless rendering engine. When performance declines, it's typically a scaling problem in your backend infrastructure.
Native mobile apps, on the other hand, are thick clients. They have large, multi-threaded code bases. They maintain state. And they have to perform on a huge variety of handsets, operating systems, and networks. Your server team can still screw up the user's experience, but there's a whole new set of issues that aren't going to show up in your server alerts.
Option 2: QA the Hell Out of Your App
Fine. You get it. You need to make sure you test your apps in a bunch of real-world scenarios. So you're going to build a fancy QA lab with 100 different handsets. Then you're going to enclose them in a Faraday cage so you can simulate adverse network conditions, and hire an army of QA folks to run each new release through every possible action in your application.
I'll admit, if you can afford it, this isn't a bad idea. But the combinations quickly become overwhelming. Imagine you care about the top 100 phones, 10 network speeds, 20 different foreign markets with different latencies, and 5 different OS versions. Now imagine you have 50 distinct actions in your app. Ignoring the interdependency between the actions and varying user data, that's 100 × 10 × 20 × 5 × 50 = 5 million combinations to test. Ouch!
This is a classic QA problem. Quality assurance means doing your best to test the most common use cases. But it’s never meant to be a replacement for monitoring. It’s simply impossible to stay on top of all the possible failure cases.
A New Type of Tool
We need a new toolset, built from the ground up to specifically measure the performance issues of mobile apps. What metrics should these new tools capture?
Screen Freezes
Nothing annoys a user more than a frozen screen. By capturing each time your app takes longer than a set threshold to render a frame, you can get an idea of how often users see a noticeable freeze.
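On Android, for example, you could watch for late frames with a Choreographer callback (available since API 16). A minimal sketch, assuming an arbitrary 100 ms threshold; the recording step is left as a comment:

import android.view.Choreographer;

// Sketch: flag any frame that arrives noticeably later than the previous one.
public class FreezeDetector implements Choreographer.FrameCallback {
    private static final long THRESHOLD_NS = 100 * 1000 * 1000; // 100 ms, an arbitrary cutoff
    private long lastFrameTimeNanos = -1;

    public void start() { // call from the main thread
        Choreographer.getInstance().postFrameCallback(this);
    }

    @Override
    public void doFrame(long frameTimeNanos) {
        if (lastFrameTimeNanos > 0 && frameTimeNanos - lastFrameTimeNanos > THRESHOLD_NS) {
            // A frame took too long: count it, log it, or queue it for upload.
        }
        lastFrameTimeNanos = frameTimeNanos;
        Choreographer.getInstance().postFrameCallback(this); // keep watching
    }
}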
Spinner Time
If you follow good UI/UX practices, anytime you need to do work that’s going to take more than a few milliseconds, you should do it in the background and throw up a spinner. But even if you are on top of your threading, users still have limited patience.
After 1 second, users have a mental context switch, and after 10 seconds, users abandon their task. If you capture each time you show a spinner, you have a good generic indicator of how long the typical user is waiting on your app.
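For instance, you can route your spinner through a small helper that records how long it stayed on screen. A minimal Android sketch; recordWaitTime is a hypothetical hook for your own reporting:

import android.os.SystemClock;
import android.view.View;
import android.widget.ProgressBar;

public class SpinnerTimer {
    private final ProgressBar progressBar;
    private long shownAt;

    public SpinnerTimer(ProgressBar progressBar) {
        this.progressBar = progressBar;
    }

    public void show() {
        shownAt = SystemClock.elapsedRealtime();
        progressBar.setVisibility(View.VISIBLE);
    }

    public void hide() {
        progressBar.setVisibility(View.GONE);
        long waitedMs = SystemClock.elapsedRealtime() - shownAt;
        recordWaitTime(waitedMs); // hypothetical: log or upload the wait time
    }

    private void recordWaitTime(long ms) { /* report to your metrics backend */ }
}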
Memory Usage
Memory bugs are one of the hardest things to track down, especially since the out-of-memory killer on iOS doesn't produce a stack trace. It's too expensive to track every allocation, but recording resident memory on iOS or VM heap use on Android is a good, low-overhead measurement.
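On Android, for example, a VM-heap sample is as cheap as this sketch, which you might run on a timer and report alongside your other metrics:

Runtime rt = Runtime.getRuntime();
long usedHeapBytes = rt.totalMemory() - rt.freeMemory(); // current VM heap use in bytes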
Network Latency, Download Time, and Bandwidth
Latency and bandwidth are both highly variable on cellular networks, and play a key role in the user experience. For each API request, you can record how long it takes to get the initial response (latency), how long it takes to get the full response (download time), and bytes downloaded (bytes/download time equals bandwidth).
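Here's a sketch of how you might derive all three numbers from a single request; the endpoint is hypothetical, and this belongs off the main thread:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RequestTimer {
    public static void measure(String endpoint) throws Exception {
        long start = System.nanoTime();
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        InputStream in = conn.getInputStream(); // returns once the response starts arriving
        long firstByte = System.nanoTime();     // latency = firstByte - start
        byte[] buffer = new byte[8192];
        long bytes = 0;
        for (int n; (n = in.read(buffer)) != -1; ) {
            bytes += n;
        }
        long done = System.nanoTime();          // download time = done - firstByte
        double downloadSeconds = (done - firstByte) / 1e9;
        double bytesPerSecond = bytes / downloadSeconds; // bandwidth
        conn.disconnect();
        // report latency, download time, and bandwidth to your metrics backend
    }
}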
Battery
One of the few reasons I uninstall apps is high battery use. There are obvious battery hogs, like using the device's GPS, but there are other unexpected gotchas, like activating the wireless antenna too often. Both iOS and Android offer APIs for monitoring battery charge levels.
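On Android, for example, you can read the charge level from the sticky battery intent; a minimal sketch:

import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.os.BatteryManager;

public class BatteryReader {
    // Returns the current charge as a percentage.
    public static float batteryPercent(Context context) {
        Intent status = context.registerReceiver(null,
                new IntentFilter(Intent.ACTION_BATTERY_CHANGED)); // sticky, so a null receiver is fine
        int level = status.getIntExtra(BatteryManager.EXTRA_LEVEL, -1);
        int scale = status.getIntExtra(BatteryManager.EXTRA_SCALE, -1);
        return 100f * level / scale;
    }
}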
Context
In mobile, context is everything. When something goes wrong, you should at a minimum know application version, location, carrier network, version of the operating system, and device.
Introducing the Pulse.io SDK
Homegrown
If you're ambitious, you may have some homegrown performance instrumentation in your application. You probably have some basic timers for key actions in your app and phone home the data via either a log or a specialized packet of JSON.
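Such instrumentation usually boils down to a stopwatch around each key action. A sketch, where loadInbox and sendMetric are hypothetical stand-ins for your own action and reporting code:

long start = System.currentTimeMillis();
loadInbox(); // hypothetical key action
long elapsed = System.currentTimeMillis() - start;
sendMetric("load_inbox_ms", elapsed); // hypothetical phone-home helper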
If you've built something like this, pat yourself on the back. You've done far more than most. But there are many drawbacks to this approach. What if you have performance problems in unexpected places in your app? If you do hit a problem, how do you know what caused it? Was it a long method call, a slow API request, or too much data on the wire?
Analyzing Data
And once you get the raw performance data, how do you analyze and visualize it? If you write a one-off script, how often do you run it? And, God forbid, what happens if your performance instrumentation causes performance issues?
Pulse.io SDK
At Pulse.io, we've been hard at work for the past year building an SDK chock-full of monitoring goodness. We capture all of the metrics listed above while maintaining a very light footprint. We consume less than 3% of CPU, batch send our data to avoid turning on the radio, and limit our memory use by discarding low priority information.
The best part about Pulse.io is that it captures all of this stuff automagically. It's one thing to manually instrument your app with your homegrown solution. It's another thing entirely to convince every engineer on your team to do so, and to apply the same instrumentation methodology consistently over time.
With Pulse.io, you just drop in the SDK and it automatically finds all the user interactions within your app and records when those interactions cause bad behavior like screen freezes or long asynchronous tasks.
Getting Started Monitoring Performance
Installing Pulse.io will take you less time than reading this article. We're currently in private beta, but if you shoot us an email at beta[at]pulse[dot]io and mention you read about us on Tuts+, we'll set you up with an account.
Once you’ve downloaded the SDK, installation is super simple. Drop the SDK into your app, add a few dependencies, and call [PulseSDK monitor:@"YOUR_APP_KEY"] within your app's main.m. You're done.
Conclusion
Hopefully I've convinced you of three things:
Slow apps lose users and therefore dollars.
Fast apps in development can be slow apps in production.
Existing tools don't do a good job monitoring real world app performance.
I encourage you to investigate your own app's real world performance. Give Pulse.io a try. There's not much to lose and a whole lot of performance to gain.
Security is becoming a bigger and bigger concern in the mobile space. As iOS developers, there are plenty of things we can do. We ensure sensitive information is saved in the keychain instead of plain text. We make sure content is encrypted before it's sent to a remote server. All this is done to make sure that the user's information is secure. Sometimes, however, we need to add an extra layer of protection at the user interface level.
Unless the user's device is enrolled in a mobile device management (MDM) solution, you cannot force your application's users to set up and use a passcode lock at the device level. ABPadLockScreen, however, provides a stylish, quick way to add such an interface to your iOS application. Let me show you how you can leverage ABPadLockScreen in your iOS applications.
1. Setup
ABPadLockScreen is available on GitHub, but I recommend installing it using CocoaPods. If you haven't started using CocoaPods for managing dependencies in your iOS and OS X projects, then you really should start today. It's the best way to manage dependencies in Cocoa projects. Since this tutorial isn't about CocoaPods, I won't go into the details of installing ABPadLockScreen using CocoaPods, but you can read plenty more about it on the CocoaPods website or read our introductory tutorial on Tuts+.
If you prefer to install ABPadLockScreen manually, then that's fine too. Download or clone the source code on GitHub and copy the files in the ABPadLockScreen folder into your Xcode project.
2. Pin Setup
The library includes two UIViewController subclasses. The ABPadLockScreenSetupViewController class is designed to allow the user to enter their initial pin. This is as simple as initializing a new instance of the view controller, passing a delegate, and presenting the view controller modally.
Setting up a pin isn't very useful unless the user gets a chance to enter it to gain access to the application. Once you're ready to secure the application, all you need to do is present an instance of the ABPadLockScreenViewController class, assign a delegate and a pin, and present the view controller modally.
If you set the allowedAttempts property, the user will only have a predefined number of attempts before the module locks them out. If allowedAttempts is not set, then the user can try entering a pin as many times as she wants.
The delegate of the ABPadLockScreenViewController instance needs to conform to the ABPadLockScreenViewControllerDelegate protocol, which declares four delegate methods.
The methods are pretty self-explanatory. You get a callback for a successful unlock, an unsuccessful entry, a cancellation—if that's allowed—and for when the user has reached the maximum number of allowed attempts.
4. Customization
There are several ways you can customize the interface of the lock screen and its behavior. You can:
enable/disable the cancel button
set the pin length, which is 4 by default
set a custom text for any of the labels, which is useful for localization
set the number of attempts, which defaults to 0, meaning an unlimited number of attempts
Aside from that, the user interface can also be customized very easily. The library uses the UIAppearance API for customizing the user interface of the lock screen. Everything, from the background, text color, and selection color to the fonts, can be set to match your application's design.
Check out the view classes, ABPadLockScreenView, ABPadButton, and ABPinSelectionView, to see what the view names are.
Conclusion
In this quick tip, we've briefly covered how to make your iOS application a little more secure by adding a lock screen to its user interface. I hope you find the library useful and easy to use. Happy coding.
If you've ever worked with the Chrome Developer Tools or Safari's Web Inspector, then I don't have to convince you of their power and usefulness. Modern tools like the Chrome Developer Tools let you explore and manipulate the DOM of a web page while you're interacting with it.
The people at Itty Bitty Apps have taken that idea and brought it to iOS. The result is Reveal and it's impressive.
Reveal lets you inspect and manipulate the view hierarchy of an iOS application at runtime. Changes you make in Reveal are pushed live to the device or the iOS Simulator.
All you need to do is install Reveal on your development machine, include the Reveal library in your iOS application, and make sure your Mac and iOS application are on the same network. It's that simple.
2. Getting Started
1. Install Reveal
Reveal isn't free, but it has a 30-day trial. Visit Reveal's website, download a copy, and install it on your Mac.
2. Include Reveal Library
Before you can start working with Reveal, you need to include the Reveal library in your Xcode project.
With CocoaPods
CocoaPods makes this step very easy. Open your project's Podfile, add pod 'Reveal-iOS-SDK', and run pod update from the command line.
Without CocoaPods
The first step is to link your project against the Reveal library. You can find the location of the Reveal library by launching the Reveal application on your Mac and selecting Show Reveal Library in Finder from the Help menu. You also need to add the -ObjC flag to Other Linker Flags in your target's Build Settings.
If you're still using Xcode 4, then make sure to link your project against the CFNetwork and QuartzCore frameworks. This step isn't necessary if you're using Xcode 5.
3. Build and Run
Build your project and run your iOS application in the iOS Simulator or on a physical device. If you're running your iOS application on a physical device, then make sure the device is on the same network as the Mac Reveal is running on.
3. Inspecting View Hierarchy
User Interface
Reveal's user interface contains three sections:
On the left, you see the view hierarchy of your application's current state. At the very top, you should see the UIScreen object.
In the middle, you see your application's user interface with two controls at the top, one for zooming and one for perspective. The perspective control lets you switch between a 2D and a 3D visualization. The 3D visualization is incredibly helpful if you're trying to find that one view that should be there but isn't.
The right pane is very similar to the one you find in Xcode. It contains a number of inspectors that display information about what you've currently selected in the view hierarchy on the left or in the middle.
Isolating Views
Seeing the view hierarchy of your application can be a bit overwhelming, especially if you're working with a collection or table view. You can collapse parts of the view hierarchy, and you can also zoom in on your application's user interface in the center view.
At times, you only want to focus on a collection of views, a table view cell, for example. You can isolate a group of subviews by double-clicking a view in the view hierarchy on the left or in the middle. You can also navigate your view hierarchy using the jump bar at the top of the window.
You can reload your application's view hierarchy by clicking the button at the top right of the window.
4. Manipulating View Hierarchy
Exploring your application's view hierarchy from multiple angles is great, but it doesn't stop there. One of the most powerful features of Reveal is its ability to manipulate views in the view hierarchy.
Select a view in the view hierarchy and edit its properties in the right pane. Reveal not only updates what you see in Reveal, it also pushes the changes to your device or the iOS Simulator. This works with any view in the view hierarchy.
5. A Word of Caution
Before you start experimenting with Reveal, it's important to know that Reveal should not be included in release builds. This is clearly stated on Reveal's website. If you forget to remove Reveal from release builds, your application will be rejected—that's a guarantee.
However, it's pretty easy to prevent this from happening by creating two targets, a development target that includes the Reveal library and a target for release builds that doesn't. This is a piece of cake if you use CocoaPods. Take a look at the following Podfile to see how this works.
platform :ios, '7.0'
pod 'AFNetworking', '~> 2.2'
pod 'CocoaLumberjack', '~> 1.8'
target :Development do
  pod 'Reveal-iOS-SDK', '~> 1.0'
end
6. Inspecting Third Party Applications
I usually don't jailbreak my iOS devices, but Peter Steinberger convinced me with his post about inspecting third party applications. Read his post if you're curious to see how your fellow developers—or Apple—build iOS applications. Remember, though, that jailbreaking an iOS device is not without risk and can cause permanent damage to the device.
Conclusion
Reveal has changed the way I debug user interface issues. The more I use it, the more I come to rely on it. Reveal isn't free, but it's more than worth the money. Take advantage of the 30-day trial and start exploring your iOS applications using this powerful tool.
The Android platform provides libraries you can use to stream media files, such as remote videos, presenting them for playback in your apps. In this tutorial, we will stream a video file, displaying it using the VideoView component together with a MediaController object to let the user control playback.
We will also briefly run through the process of presenting the video using the MediaPlayer class. If you've completed the series on creating a music player for Android, you could use what you learn in this tutorial to further enhance it. You should be able to complete this tutorial if you have developed at least a few Android apps already.
1. Create a New App
Step 1
You can use the code in this tutorial to enhance an existing app you are working on or you can create a new app now in Eclipse or Android Studio. Create a new Android project, give it a name of your choice, configure the details, and give it an initial main Activity class and layout.
Step 2
Let's first configure the project's manifest for streaming media. Open the manifest file of your project and switch to XML editing in your IDE. For streaming media, you need internet access, so add the following permission inside the manifest element:
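<uses-permission android:name="android.permission.INTERNET" />

Next, add a VideoView element to your main layout file. A minimal sketch follows; the myVideo id is a name chosen for this example and referred to throughout the rest of the code:

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical" >

    <VideoView
        android:id="@+id/myVideo"
        android:layout_width="match_parent"
        android:layout_height="wrap_content" />

</LinearLayout>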
Alter the parent layout to suit your own app if necessary. We give the VideoView instance an id attribute so that we can refer to it later. You may need to adjust the other layout properties for your own design.
Step 2
Now let's retrieve a reference to the VideoView instance in code. Open your app's main Activity class and add the following additional imports:
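import android.net.Uri;
import android.widget.MediaController;
import android.widget.VideoView;

These are the classes used in the rest of this tutorial. Then, inside onCreate, after setContentView, retrieve the VideoView using the id from the layout sketched earlier:

VideoView vidView = (VideoView) findViewById(R.id.myVideo);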
Now we can stream a video file to the app. Prepare the URI for the endpoint as follows:
String vidAddress = "https://archive.org/download/ksnn_compilation_master_the_internet/ksnn_compilation_master_the_internet_512kb.mp4";
Uri vidUri = Uri.parse(vidAddress);
You will of course need to use the remote address for the video file you want to stream. The example here is a public domain video file hosted on the Internet Archive. We parse the address string as a URI so that we can pass it to the VideoView object:
vidView.setVideoURI(vidUri);
Now you can simply start playback:
vidView.start();
The Android operating system supports a range of video and media formats, with each device often supporting additional formats on top of this.
As you can see in the Developer Guide, supported video file formats include 3GP, MP4, WEBM, and MKV, depending on the format used and on the platform level the user has installed.
Audio file formats you can expect built-in support for include MP3, MID, OGG, and WAV. You can stream media on Android over RTSP, HTTP, and HTTPS (from Android 3.1).
4. Add Playback Controls
Step 1
We've implemented video playback, but users will expect, and are accustomed to, having control over it. Again, the Android platform provides a standard way to handle this with familiar interaction: the MediaController class.
In your Activity class's onCreate method, before the line in which you call start on the VideoView, create an instance of the class:
MediaController vidControl = new MediaController(this);
Next, set it to use the VideoView instance as its anchor:
vidControl.setAnchorView(vidView);
And finally, set it as the media controller for the VideoView object:
vidView.setMediaController(vidControl);
When you run the app now, the user should be able to control playback of the streaming video, including fast forward and rewind buttons, a play/pause button, and a seek bar control.
The seek bar control is accompanied by the length of the media file on the right and the current playback position on the left. The user can tap along the seek bar to jump to a new position in the file, and the streaming status is indicated using the same type of display users are accustomed to from sites and apps like YouTube.
As you will see when you run the app, the default behavior is for the controls to disappear after a few moments, reappearing when the user touches the screen. You can configure the behavior of the MediaController object in various ways. See the series on creating a music player app for Android for an example of how to do this. You can also enhance media playback by implementing various listeners to configure your app's behavior.
5. Using MediaPlayer
Step 1
Before we finish, let's run through an alternative approach for streaming video using the MediaPlayer class, since we used it in the series on creating a music player. You can stream media, including video, to a MediaPlayer object using a surface view. For example, you could use the following layout:
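The exact layout and setup depend on your project, but here is a minimal sketch: a SurfaceView to render into (the surfView id and activity_player layout name are choices made for this example), plus an Activity that hands the surface to a MediaPlayer once the surface exists:

<SurfaceView xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/surfView"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />

import android.app.Activity;
import android.media.MediaPlayer;
import android.os.Bundle;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class PlayerActivity extends Activity
        implements SurfaceHolder.Callback, MediaPlayer.OnPreparedListener {

    private MediaPlayer mediaPlayer;
    private String vidAddress = "https://archive.org/download/ksnn_compilation_master_the_internet/ksnn_compilation_master_the_internet_512kb.mp4";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_player); // the layout containing the SurfaceView
        SurfaceView surfView = (SurfaceView) findViewById(R.id.surfView);
        surfView.getHolder().addCallback(this); // wait for the surface to exist
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        try {
            mediaPlayer = new MediaPlayer();
            mediaPlayer.setDisplay(holder);
            mediaPlayer.setDataSource(vidAddress);
            mediaPlayer.setOnPreparedListener(this);
            mediaPlayer.prepareAsync(); // prepare without blocking the UI thread
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public void onPrepared(MediaPlayer mp) {
        // start playback here, as shown below
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {}

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {}
}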
Finally, in the onPrepared method, start playback:
mediaPlayer.start();
Your video should now play in the MediaPlayer instance when you run the app.
Conclusion
In this tutorial, we have outlined the basics of streaming video on Android using the VideoView and MediaPlayer classes. You could add lots of enhancements to the code we implemented here, for example, by building video or streaming media support into the music player app we created. You may also wish to check out associated resources for Android such as the YouTube Android Player API.
As interesting as web applications are, they are not the only game in town. These days, mobile applications are a massive part of the software development landscape. Just like with web apps, we want our mobile application code to be performant.
Fortunately, in the last year or two, New Relic has focused hard on building out a solution for monitoring the performance of your mobile apps. Today we will look at how you can start using New Relic to monitor the performance of an Android application.
Why Monitor Mobile Apps At All?
The great thing about building a web app is that you can always deploy a new version, instantly forcing your whole user base to use your new code. So if you weren't monitoring your code before, you can easily hook up New Relic or hack up something custom, push it out, and start getting metrics within a few minutes.
With mobile apps, you're not so fortunate. You can, of course, release a new version any time you want, but the process is potentially longer—app store approval, for example. And even when your new version is out there, you can't force your users to upgrade. It's therefore important to think about any kind of monitoring you might want to do before you ever release the first version of your app.
Even if you don't need to worry about the performance of your app for a while, once you do, your monitoring solution will already be in place; you just need to start interpreting the metrics.
In addition, it's a rare mobile app these days that doesn't also have a web component to it. Just about every application these days makes HTTP requests to an API—and often many different APIs.
As we know, network calls are not always the most reliable things. It would be great if we could find out how often API calls fail for our users and, more importantly, how slow our API calls are on average. It's the only way to know if our users are having a good experience with our application or if they are being frustrated by lag.
If you're not monitoring your application, you can only guess about this kind of stuff. I don't know about you, but I am usually much more comfortable with cold, hard data.
There are many other important questions that a good monitoring solution can help us answer, but we can cover those as we're working with our Android application, so let's get cracking.
Building A Basic Android App
Normally, for an introductory article like this one, I like to focus on the subject at hand—in this case New Relic for mobile—and keep the rest of the code as Hello World as possible.
It's easy to build a Hello World Android app, Google even has a tutorial about it. Unfortunately, that app is just a little too basic. It makes no network calls, which means we wouldn't be able to look at a large part of what New Relic offers for mobile app monitoring. So, we'll slightly modify our basic app.
Our app will have two screens. On the first screen, we will be able to enter a Twitter handle and submit it. At this point, our app will go to the second screen and display some placeholder text. In the meantime, our application will go off to Twitter and fetch the latest tweet for that handle. Once the tweet is available, we will update the second screen to display it. The app is still pretty basic, but hopefully it is complex enough that we'll be able to get some interesting data from New Relic.
I'm not going to walk through setting up the whole application, but here are the interesting parts. As per the Google tutorial, pressing the button on the first screen passes the value of the text field along to the second screen; in our case, that value is a Twitter handle. The click handler looks something like this (a sketch following Google's tutorial, where the edit_message id comes from the first screen's layout):
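public void sendMessage(View view) {
    Intent intent = new Intent(this, DisplayMessageActivity.class);
    EditText editText = (EditText) findViewById(R.id.edit_message);
    intent.putExtra(EXTRA_MESSAGE, editText.getText().toString());
    startActivity(intent);
}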
On the second screen, we want to fetch the latest tweet for that handle. But we can't do it on the UIThread, we need an AsyncTask. We'll create one and kick it off inside the onCreate method of the second activity:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_display_message);
    setupActionBar();
    String handle = getIntent().getStringExtra(MainActivity.EXTRA_MESSAGE);
    TextView textView = new TextView(this);
    textView.setTextSize(40);
    new FetchLatestTweetTask(textView, handle).execute();
    // Set the text view as the activity layout
    setContentView(textView);
}
The actual task looks like this:
public class FetchLatestTweetTask extends AsyncTask<Void, Void, String> {
    private TextView textView;
    private String handle;

    public FetchLatestTweetTask(TextView textView, String handle) {
        this.textView = textView;
        this.handle = handle;
    }

    @Override
    protected String doInBackground(Void... args) {
        Twitter twitter = new TwitterFactory().getInstance();
        String status = null;
        try {
            User user = twitter.showUser(handle);
            status = user.getStatus().getText();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return status;
    }

    @Override
    protected void onPreExecute() {
        textView.setText(String.format("Fetching tweet by @%s ...", handle));
    }

    @Override
    protected void onPostExecute(String tweet) {
        textView.setText(tweet);
    }
}
We display some placeholder text before fetching the tweet and update the placeholder text with the tweet's content after we've fetched it. We use Twitter4J to talk to the Twitter API. In order for the API library to work, I've dumped a twitter4j.properties file in the /src folder of the project so that it ends up on the classpath as per the documentation.
The properties file contains the OAuth consumer key, consumer secret, access token key, and access token secret for the Twitter app that I set up just for this.
This is all the interesting code in our application, the rest is just generic boilerplate as per the introductory Google tutorial.
Setting Up New Relic For Your App
Setting up New Relic to start monitoring your Android app is very easy. In your New Relic account, click on Mobile in the menu. This is where all your mobile apps will live, just like the web apps live under the Applications menu item.
Now click the Add a new app button:
This will take you to another screen where New Relic will walk you through setting up a new app:
We click on Android and give our app a name. Once you've given your app a name, press Continue so that New Relic generates a new API key for your application.
Next, we need to install the New Relic agent. I'm using Eclipse so I go to Help > Install New Software... and add New Relic as a site:
Click Next and wait for Eclipse to do its thing. Once it's done, you need to restart Eclipse. At this point, you should be able to right-click your project in Eclipse and there should be an Install New Relic menu option. When we click it, the New Relic agent jar will end up in the /libs folder of our project.
Incidentally, if a new version of the New Relic agent comes along, you update it in the same way. First, do Help > Check for Updates to get the latest updates. After that, just right-click your project and there should be an Update New Relic menu option, which will update the New Relic jar when clicked:
Now we need to give our app permissions for INTERNET and ACCESS_NETWORK_STATE as New Relic will need to send data back to their servers. Our AndroidManifest.xml will look like this:
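<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />

We also need to start the agent when the app launches. The setup screen shows you the exact line, pre-filled with your token; it goes at the top of your main activity's onCreate and looks something like this (YOUR_APP_TOKEN is a placeholder):

import com.newrelic.agent.android.NewRelic;

NewRelic.withApplicationToken("YOUR_APP_TOKEN").start(this.getApplication());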
Note the application token. If you pressed Continue when you gave your application a name, this should already be pre-filled for you. Once your app is up and running, you can always look it up again in the Settings menu for your application.
After this step, we build the project and deploy it to an emulator or a physical device. I prefer to deploy to a test device as I find it to be faster, more responsive, and easier to work with. I will use my Nexus 4.
If we look at the LogCat tab when the application is deploying, we should see output similar to this:
This is New Relic calling home to send data. If we now go back to the New Relic user interface we should start seeing data.
Exploring the Dashboards
When you go to look at your app in New Relic, you will first hit the Overview screen. Similar to the web application overview screen, it displays several important metrics about your app such as Http response time, Slowest Interactions, etc.
The activity on those graphs is sporadic since we only have one client sending back data and we've only done a couple of interactions.
So what are some of the more interesting things that you can see in New Relic for your mobile app? Well, there is the App > Devices tab that shows you which devices people are using your app on. This is interesting since you can tell at a glance what sort of phones/tablets most of your user base is using. Are people mostly on older devices or newer ones? Are they mostly on tablets or phones? This is valuable data.
You can drill down into each device and see how well your app is doing there. Is the interaction time for that device slower than what you would expect? What about the Http response time? How many active users are currently using your app on this type of device? In our case:
There is only one device, so there isn't that much to see. But if a large percentage of your user base was on a device where your app wasn't performing very well, you would see it straight away and be able to address the issue.
Similar to the Devices tab, there is the OS versions tab, which breaks down the usage of your app by the version of Android that your users have installed:
You can tell if you need to focus more of your attention on newer versions of Android or if most of your user base is still on an older version.
Then there's the Network tab and its children. In the Map tab, you can see which APIs your app connects to and how well each one of them is doing in terms of throughput, response time, and error rate:
In our case, we only have the Twitter API, and it's actually pretty slow. We might consider caching some of the responses for a period of time.
In the Network > Http requests tab, we can drill down into each endpoint of every API that we use, in a similar way to how we drill down into devices and OS versions. We can find out which endpoints are used most and which are the slowest. This gives us some solid leads regarding where to direct our optimization efforts. This is especially true if we also control the APIs that are being used.
In the Network > Geography tab, you can tell where most of your users are coming from and in the Carriers tab you can see what kind of internet connection your users have. In our case, I am on Wi-Fi:
It's very valuable to know if your user base is using Wi-Fi, 3G, or 4G as your optimization efforts can be completely different depending on the breakdown.
Under Settings > Alerts, you can also define some conditions for your external APIs for New Relic to notify you if response times exceed a certain threshold or if error rates go above a certain percentage.
This is potentially less valuable for APIs you don't control, but still a good indicator if an API you're using is unstable or not very performant.
The last two interesting ones are Usage > Versions and Usage > Monthly Uniques. The first one shows you which versions of your app are being used in the wild. This allows you to tell how eagerly users download updates of your app. It also shows you how well each version of your app is performing on the device. Is the new version using more memory than the previous version?
The monthly uniques give you an idea of whether people are actually interacting with your app. You may have 10 million downloads, but if the number of monthly uniques is low, then things aren't as great as they seem to be.
Conclusion
This is a basic overview of some—but not all—of the interesting features of New Relic for Android apps. In and of themselves, none of the features are mind-blowing, but it is good, solid data that, for a mobile app, you can't get any other way.
How your app is being used and on which devices, how well your network calls perform on a slow connection: this is the type of data that forces you to stop guessing and lets you make informed decisions about how to improve your app and give your users a better experience.
Remember, performance is just as important for mobile apps as it is for web apps, and there's no reason to guess about what's making your app slow when there's a much better way readily available.
David Smith is an independent software developer focusing primarily on Apple's iOS platform. David's first experience with mobile development dates back to the early 2000s when he created apps for Palm and Windows Mobile.
With his company Cross Forward Consulting, he has released a wide range of mobile apps, such as Audiobooks, Check the Weather, and Pedometer++. He also runs Feed Wrangler, a popular RSS service David launched shortly after Google shut down Google Reader.
David is well-known in the iOS community for several reasons. He hosts a wonderful podcast, Developing Perspective, and he frequently shares his knowledge and experiences on his website.
In today's interview, I talk with David about running a business in the App Store, the importance of income diversification, and the challenges of being an indie developer.
Can you tell us a bit about your background and how you got started with iOS development?
My career as a developer actually started in mobile, but it was back in the early 2000s. Back then I used to write apps for the Palm and later Windows Mobile platforms. It was mobile, but not in the way that we really consider it today.
I did that for a while and then I got into web development, Ruby on Rails for the most part, and into iOS development, mostly because it seemed like the next big thing. It’s been quite a ride ever since.
Do you think your experience with Palm and Windows Mobile gave you a head start when the iPhone was introduced in 2007?
I think it helped. At this point, after so many years and after the platform has evolved as much as it has, I think that difference is less significant. But I think, in that first year, it did help that I had spent a lot of time writing apps for small screens with very low screen resolutions.
It helped me to be more thoughtful about what I can fit on the screen and have a better understanding of what that context feels like as a user. I’d spend hours and hours using these small mobile devices—even before I got my first iPhone.
What motivated you to get into iOS development?
I think I’ve always had an entrepreneurial bent. I’ve always wanted to try and find something I could do to start a business and make it on my own—rather than working for somebody else.
Up until the iPhone launched, it had never really been something that I was able to do on the product side. At that point, I was a consultant. I transitioned from a typical 9-5 job into a more work-for-hire job.
When the iPhone SDK came out, it was something that seemed like it had a higher probability of me being able to make a run at it. And it turned out that that was correct.
Almost a year ago, you launched Feed Wrangler, a web service replacing Google's Reader. Feed Wrangler was launched shortly after Google shut down Reader. What inspired you to create Feed Wrangler?
It was interesting going back to my roots and going back to a past life where I used to be a Rails developer. I had a lot of experience building web applications, but those skills had fallen a little bit by the wayside as my focus shifted to building iOS apps.
I knew that Google Reader was most likely not long for this world and I thought maybe I’ll try to build something for myself to use when that time comes. When Google announced they were shutting down Google Reader, I thought "Why don’t I see if I can turn this into a product? Why don’t I take a run at this?"
What were the most challenging aspects of developing and releasing Feed Wrangler?
It turned out that the most challenging part of building Feed Wrangler wasn't necessarily the code, it was the scaling. All of my summer was spent trying to pull the system together as it gained an increasing number of users and more traffic. The amount of data that it's trying to index and manage is rather considerable and was more than I ever really thought about when I was initially building it.
The biggest challenge was having to move all the way down the stack. Before Feed Wrangler, I was so used to working only at the very high level of writing code, deploying it somewhere, and then it sort of ran and worked. With Feed Wrangler, having to really worry about how fast I’m able to write data to a hard drive on my database server was a very important thing.
I’m glad that things have settled down because that was a pretty rough summer. In the App Store, most products have a natural ramp-up period where you create something, you put it out there, and you’re trying to build buzz gradually over time. With Feed Wrangler, the Google Reader boat was sinking and everyone was jumping off and trying to find an alternative. That was a very sudden, rapid ramp-up in terms of users and use.
With Feed Wrangler, you've created a source of recurring revenue for your company. Has this changed the way you run your business? Has it given you room to experiment with other projects?
Yes. I definitely think it has. Something that I’ve done over and over again as I’ve been trying to build my business is to diversify the sources of revenue that my business has, so I can take more risks or be more aggressive in the things that I’m trying.
I have free apps with ads, I have free apps with in-app purchases, I have paid apps, and now I have a subscription-based product. Having such a diverse stream of income allows me to take more risks. New things don't have to pay off right away.
Pedometer++ is a good example. It started off as a proof of concept that I put out in the App Store. It got a lot more interest than I expected. I’ve been able to invest in it and it has a pretty wide audience now and is doing very well. Thanks to my other products, I had the time to adapt, tweak, and change it over as it went, even when it was initially generating almost no revenue for me and was nothing more than a hobby.
In the App Store, it's difficult to predict when a product has potential and when it doesn't. After five years of doing business in the App Store, have you developed a sixth sense that helps with this challenge?
I definitely wouldn’t say that I have a sixth sense about it, but my gut instinct on things is probably a bit more refined than it used to be. I’ve launched significantly more flops than I have successes.
If I think back, over the five years that I’ve been doing this, I probably launched something in the range of fifty to sixty different products, ideas, or concepts, and I’ve probably only ever had five or six of them pan out.
It seems to be about one in eight or ten products that actually work out in a way that is worth pursuing. I have a better sense now of the kinds of areas where it makes sense to invest time and energy, and I think that has a lot to do with understanding your competition and understanding where you’re going to be competing.
If you’re going to build a weather app, which is something that I’ve done with Check the Weather, it's important to understand that it’s unlikely that you’re going to turn the market on its head and become the dominant leader. You're competing against hundreds, if not thousands, of other applications.
You have to have the right mentality and understand that everyone is always looking for a new weather app, but also that they're going to be looking for a new weather app right after they get yours.
You can't really predict whether something's going to succeed or not, but I think you can have a reasonable understanding of what your best case and worst case scenarios are. And if you’re honest about those, I think you’re able to make much better decisions about what you do and how much money and time you invest into something.
Especially in the App Store, there’s a lot of space for products that solve a specific problem and solve it well. But don’t try and do too much too soon. It’s far better to release something that does something unique and interesting rather than solving every problem.
If your app takes off and succeeds, you’re going to have many opportunities to continue to invest in it in the future. That's a safer approach than putting all that time and money in upfront and not necessarily knowing if it’s going to pan out.
Marketing mobile applications isn't easy, because you have a very small margin to work with. What strategies do you use for marketing mobile applications?
Marketing is an area that I always wish I had a better answer for. In my experience, there are very few forms of paid advertising that really pay off. I’ve never found them to really work well.
The most effective marketing seems to be to try and develop relationships with people in the press. It's important to develop relationships with them before you need them to take a look at what you’re building. Your hope is that your app is shown in one of those venues, because ultimately you’re trying to build awareness.
If your app is good and it has that spark that’s going to draw people’s attention, once you have that initial bit of interest, then it’s up to your app to market itself. If people see it and they like it, they’re going to tell their friends about it, they’re going to talk about it online. That kind of word of mouth advertising and marketing seems to be the most successful.
Most of my efforts are on trying to get that initial bit of buzz, that initial bit of press, and then letting go and seeing where it goes. Anytime I’ve tried after that initial push to continue to have things happen, such as advertising or continuing to reach out to people in the press, it doesn’t work nearly as well.
You also have to understand that not every app is going to be successful. A lot of people who listen to Developing Perspective get very frustrated about this. They've spent all this time and energy building an app, they put it out there, and it didn’t go anywhere. They then ask me "What kind of marketing can I do for it?" The hard answer sometimes is that there might not be anything you can do. You may have misjudged the market or there’s something about your app that’s very narrow in focus—narrower than you thought.
There's no silver bullet like "If you do these five things, then your app will be successful." The quality of your application and its design are the best marketing that you’ll ever do.
On top of building mobile apps and running a web service, you also host a podcast, Developing Perspective. What is your goal with Developing Perspective?
Developing Perspective is a podcast I’ve done for almost three years. It's about the lessons I learned from being an independent iOS developer. Unless it's an interview, the podcast is limited to fifteen minutes.
For a long time, I've been a huge fan of podcasts. When I was creating Developing Perspective, I listened to all the 5by5 shows like Build and Analyze, Hypercritical, The Talk Show, and I really loved the podcast format. But I was looking at it and it was very intimidating for me to sit down and look at something that was maybe an hour, hour and a half long, and do that on an ongoing basis.
So I thought "Why don't I just take a constraint and put it on top of it and say it's never going to be longer than fifteen minutes?" It was good for the listeners. I got a lot of feedback. People loved that they could always squeeze in Developing Perspective, because it's only fifteen minutes long, so it's not something they have to sit down for and devote a lot of time to.
It’s something I’ve been able to do now for so long because it only takes me about half an hour to do an episode. It’s something that I enjoy doing and where I feel like I can help people. I'm always struggling with some new problem and sharing that, even if it’s not the solution, the workarounds and the hacks that I’ve found, seems to really help other people as well.
Thank you so much for your time, David. Where can people find or follow you online?
In this tutorial, I'll show you how to take advantage of the new 2D tools included in Unity to create a 2D game.
1. Application Overview
In this tutorial, you'll learn how to set up a Unity 2D project and build a mobile game using C#.
The objective of the game is to shoot a teleporting ray at the cows before they can reach the safety of the barn.
In this project you will learn the following aspects of Unity development:
setting up a 2D project in Unity
becoming familiar with the Unity interface
creating a Prefab
attaching scripts to game objects
working with physics collisions
using timers
2. Create a New Unity Project
Open Unity and select New Project from the File menu to open the new project dialog. Select a directory for your project and set Set up defaults for: to 2D.
3. Build Settings
In the next step, you're presented with Unity's interface. Set the project up for mobile development by choosing Build Settings from the File menu and selecting your platform of choice.
Unity can build for iOS, Android, BlackBerry, and Windows Phone 8, which is great if you plan to create a mobile game for multiple platforms.
4. Devices
Since we're about to create a 2D game, the first thing we need to do after selecting the platform we're targeting, is choosing the size of the artwork that we'll use in the game.
iOS:
iPad without Retina: 1024px x 768px
iPad with Retina: 2048px x 1536px
3.5" iPhone/iPod Touch without Retina: 320px x 480px
3.5" iPhone/iPod with Retina: 960px x 640px
4" iPhone/iPod Touch: 1136px x 640px
Because Android is an open platform, there are many different devices, screen resolutions, and pixel densities. A few of the more common ones are listed below.
Asus Nexus 7 Tablet: 800px x 1280px, 216 ppi
Motorola Droid X: 854px x 480px, 228 ppi
Samsung Galaxy SIII: 720px x 1280px, 306 ppi
And for Windows Phone and BlackBerry:
Blackberry Z10: 720px x 1280px, 355 ppi
Nokia Lumia 520: 400px x 800px, 233 ppi
Nokia Lumia 1520: 1080px x 1920px, 367 ppi
Even though we'll be focusing on the iOS platform in this tutorial, the code can be used to target any of the other platforms.
5. Export Graphics
Depending on the device you're targeting, you may need to convert the artwork to the recommended size and pixel density. You can do this in your favorite image editor. I've used the Adjust Size... function under the Tools menu in OS X's Preview application.
6. Unity Interface
Make sure to click the 2D button in the Scene panel. You can also modify the resolution that's being used to display the scene in the Game panel.
7. Game Interface
The user interface of our game will be simple. You can find the artwork for this tutorial in the source files of this tutorial.
8. Language
You can use one of three languages in Unity: C#, UnityScript (a language similar to JavaScript in terms of syntax), and Boo. Each language has its pros and cons, but it's up to you to decide which one you prefer. My preference goes to C#, so that's the language I'll be using in this tutorial.
If you decide to use another language, then make sure to take a look at Unity's Script Reference for examples.
9. 2D Graphics
Unity has built a name for being a great platform for creating 3D games for various platforms, such as Microsoft Xbox 360, Sony PS3, Nintendo Wii, the web, and various mobile platforms.
While it has always been possible to use Unity for 2D game development, it wasn't until the release of Unity 4.3 that it included native 2D support. We'll learn how to work with images as sprites instead of textures in the next steps.
10. Sound Effects
I'll use a number of sounds to improve the game experience. The sound effects used in this tutorial can be found at Freesound.org.
11. Import Assets
Before we start coding, we need to add our assets to the Unity project. There are several ways to do this:
select Import New Asset from the Assets menu
add the items to the assets folder in your project
drag and drop the assets in the project window
After completing this step, you should see the assets in your project's Assets folder in the Project panel.
12. Create Scene
We're ready to create the scene of our game by dragging objects to the Hierarchy or Scene panel.
13. Background
Start by dragging and dropping the background into the Hierarchy panel. It should then appear in the Scene panel.
Because the Scene panel is set to display a 2D view, you'll notice selecting the Main Camera in the Hierarchy shows a preview of what the camera is going to display. You can also see this in the game view. To make the entire scene visible, change the Size value of the Main Camera to 1.6 in the Inspector panel.
14. Ship
The ship is also a static element the player won't be able to interact with. Position it in the center of the scene.
15. Barn
Select the barn from the Assets panel and drag it to the scene. Position it as illustrated in the screenshot below.
16. Barn Collider
To make sure the barn is notified when a cow hits it—enters the barn—we need to add a component, a Box Collider 2D to be precise.
Select the barn in the scene, open the Inspector panel, and click Add Component. From the list of components, select Box Collider 2D from the Physics 2D section. Make sure to check the Is Trigger box.
We want the cow to react when it hits the door of the barn so we need to make the collider a bit smaller. Open the Inspector and change the Size and Center values of the collider to move the box closer to the door of the barn.
17. Barn Collision Script
It's time to write some code. We need to add a script so the application can respond to the collision when a cow enters the barn.
Select the barn and click the Add Component button in the Inspector panel. Select New Script and name it OnCollision. Remember to change the language to C#.
Open the newly created file and add the following code snippet.
using UnityEngine;
using System.Collections;

public class OnCollision : MonoBehaviour
{
    void OnTriggerEnter2D(Collider2D other)
    {
        if (other.gameObject.name == "cow(Clone)")
        {
            /* Play the save cow sound */
            audio.Play();
            /* Destroy the cow */
            Destroy(other.gameObject);
        }
    }
}
The snippet checks for a collision between the object to which the script is linked, the barn, and an object named cow(Clone), which will be an instance of the cow Prefab that we'll create later. When a collision takes place, a sound is played and the cow object is destroyed.
18. Barn Sound
To play a sound when a cow hits the barn, we first need to attach the sound to the barn. Select it from the Hierarchy or Scene view, click the Add Component button in the Inspector panel, and select Audio Source from the Audio section.
Uncheck Play on Awake and click the little dot on the right, below the gear icon, to select the barn sound.
You can increase the size of the icons in Unity's user interface (gizmos) by clicking Gizmos in the Scene panel and adjusting the position of the slider.
19. Ray
Drag the ray graphic from the Assets panel to the scene and add a collider to it. This is necessary to detect a collision with the unlucky cow. Check the Is Trigger option in the Inspector panel.
20. Ray Script
Create a new script by repeating the steps I outlined a few moments ago. Name the script Bullet and replace its contents with the following code snippet:
using UnityEngine;
using System.Collections;

public class Bullet : MonoBehaviour
{
    public AudioClip cowSound;

    // Use this for initialization
    void Start()
    {
        renderer.enabled = false; /* Makes object invisible */
    }

    // Update is called once per frame
    void Update()
    {
        /* Get main input */
        if (Input.GetButton("Fire1"))
        {
            renderer.enabled = true; /* Makes object visible */
            /* Play the ray sound */
            audio.Play();
        }

        if (renderer.enabled == true)
        {
            transform.position += Vector3.down * (Time.deltaTime * 2);
        }

        /* Check for out of bounds */
        if (this.transform.position.y < -1.5)
        {
            transform.position = new Vector2(0.08658695f, 0.1924166f); /* Return bullet to original position */
            renderer.enabled = false;
        }
    }

    void OnTriggerEnter2D(Collider2D other)
    {
        if (other.gameObject.name == "cow(Clone)")
        {
            AudioSource.PlayClipAtPoint(cowSound, transform.position);
            /* Destroy the cow */
            Destroy(other.gameObject);
            transform.position = new Vector2(0.08658695f, 0.1924166f); /* Return bullet to original position */
            renderer.enabled = false;
        }
    }
}
That's a lot of code, but it isn't complicated. Let's see what is happening. First, we create an AudioClip instance named cowSound, which we'll use to store an audio file. This is just another technique to play a sound if you don't want to add two audio components to the object. We declare the variable as public so we can access it from the Inspector. Click the little dot on the right of cowSound and select the audio file.
We then make the ray invisible by disabling its renderer. We use the same object so we can save resources, which is an important optimization for less powerful devices.
We then detect touches on the screen, which make the ray visible and play back the ray sound (see below). As long as the ray is visible, it moves down the screen to hit a cow.
There's also code to detect if the ray is outside the scene's bounds. If this is the case, we reposition it, ready to fire again (check the ray's x and y values in the Inspector).
The last part checks whether the ray hits a cow. If it does, it plays the cow sound and destroys the cow. The ray is then made invisible and repositioned at its original position, ready to fire again.
21. Ray Audio Source
To add the audio for the ray, select it in the Hierarchy or Scene view and click Add Component in the Inspector panel. Select Audio Source from the Audio section. Uncheck Play on Awake and click the little dot on the right to select the sound file.
22. Add a Cow
Drag the graphic for the cow from the Assets panel and position it in the scene as shown below.
23. Rigid Body 2D
To detect a collision, at least one of the colliding objects needs to have a Rigidbody 2D component attached to it. As the cow can collide with both the barn and the ray, it's best to add the component to the cow.
24. Cow Collider
We also need to add a collider to the cow so we can detect collisions with the barn and the ray. Make sure to check the Is Trigger checkbox in the Inspector.
25. Move Cow Script
Add a script component to the cow and replace its contents with the following:
using UnityEngine;
using System.Collections;
public class MoveCow : MonoBehaviour
{
public Vector3 moveSpeed;
public float spawnTime = 2f;
public float spawnDelay = 2f;
// Use this for initialization
void Start()
{
moveSpeed = Vector3.left * Time.deltaTime;
InvokeRepeating("ChangeSpeed", spawnDelay, spawnTime);
}
void ChangeSpeed()
{
moveSpeed = new Vector3(Random.Range(-2, 0), 0, 0) * 0.05f; /* Random speed of -1 or -2, scaled down */
}
// Update is called once per frame
void Update()
{
transform.position += moveSpeed;
}
}
The MoveCow class animates the cow across the screen using a variable named moveSpeed. The call to InvokeRepeating changes the cow's speed at regular intervals, making it sprint from roughly the moment it reaches the center of the scene. This makes the game more challenging.
26. Create Cow Prefab
With the necessary components added to the cow, it's time to convert it to a Prefab. What is a Prefab? Let's consult the Unity Manual:
"A Prefab is a type of asset—a reusable GameObject stored in Project View. Prefabs can be inserted into any number of scenes, multiple times per scene. When you add a Prefab to a scene, you create an instance of it. All Prefab instances are linked to the original Prefab and are essentially clones of it. No matter how many instances exist in your project, when you make any changes to the Prefab you will see the change applied to all instances."
If you're coming from Flash and ActionScript, then this should sound familiar. To convert the cow to a prefab, drag the cow from the Hierarchy panel to the Assets panel. As a result, the name in the Hierarchy will turn blue.
Converting the cow to a prefab allows us to reuse it, which is convenient as it already contains the necessary components.
27. Spawner Script
The Spawner script is responsible for making the cows appear. Open MonoDevelop—or your favorite C# editor—and create a new script:
using UnityEngine;
using System.Collections;
public class Spawner : MonoBehaviour
{
public float spawnTime = 2f;
public float spawnDelay = 2f;
public GameObject cow;
// Use this for initialization
void Start()
{
InvokeRepeating("Spawn", spawnDelay, spawnTime);
}
void Spawn()
{
/* Instantiate a cow */
GameObject clone = Instantiate(cow, transform.position, transform.rotation) as GameObject;
}
}
We call the InvokeRepeating method to spawn cows using the values set by spawnTime and spawnDelay. The cow GameObject is declared public so it can be set using the Inspector. Click the little dot on the right and select the cow Prefab.
28. Spawner Game Object
To instantiate the cow prefab, we'll use the graphic of the cow we added to the scene earlier. Select it and remove its components. Then add the Spawner script.
29. Testing
It's time to test the game. Press Command + P to play the game in Unity. If everything works as expected, you are ready for the final steps.
30. Player Settings
When you're happy with your game, it's time to select Build Settings from the File menu and click the Player Settings button. This brings up the Player Settings in the Inspector panel where you can adjust the parameters for your application.
31. Application Icon
Using the graphics you created earlier, you can now create a nice icon for your game. Unity shows you the required sizes, which depend on the platform you're building for.
32. Splash Image
The splash or launch image is displayed when the application is launched.
33. Build
Once your project is properly configured, it's time to revisit the Build Settings and click the Build button. That's all it takes to build your game for testing and/or distribution.
34. Xcode
If you're building for iOS, you need Xcode to build the final application binary. Open the Xcode project and choose Build from the Product menu.
Conclusion
In this tutorial, we've learned about the new 2D capabilities of Unity, collision detection, and other aspects of game development with Unity.
Experiment with the result and customize it to make the game your own. I hope you liked this tutorial and found it helpful.
Paper maps have become antiques. They've been replaced by dedicated GPS navigation devices and mobile applications, which have become ubiquitous. You find them in cars and, more importantly, on tablets and smartphones.
One of the main features of a navigation device is detecting the device's current position and updating it as it changes. This helps us to get from one location to another by giving us directions.
Today, we're lucky enough to have geolocation natively supported by browsers. In this article, we'll discuss the Geolocation API, which allows applications to detect and track the device's location.
The ability to detect the device's location has a wide range of applications. On the web, for example, Google, Microsoft, and Yahoo use the user's location to personalize the SERPs (Search Engine Results Pages). Localization is another great fit for geolocation.
1. What Is It?
The Geolocation API defines a high-level interface to location information, such as latitude and longitude, which is linked to the device hosting the implementation. The API itself is agnostic of the underlying location information sources.
Common sources of location information include the Global Positioning System (GPS) and location information inferred from network signals such as the device's IP address, RFID, Wi-Fi, Bluetooth, MAC addresses, GSM/CDMA cell IDs, and user input. No guarantee is given that the API returns the device's actual location.
The Geolocation API is a W3C Recommendation meaning that the specification is stable. We can assume that it won't change in the future unless a new version is being worked on. It's worth noting that the Geolocation API isn't officially part of the HTML5 specification, because it's been developed separately.
Now that we know what the Geolocation API is and what it can do, it's time to see what methods it exposes to developers.
2. Implementation
Accuracy
The API uses several sources to detect the device's position. For example, on a notebook or a desktop computer without a GPS chip, it's likely that the position is inferred from the device's IP address, which means that the location returned by the API isn't very accurate.
On a mobile device, however, the API can use information from multiple and more accurate sources, such as the device's GPS chip, the network connection (Wi-Fi, 3G, HSPA+), and the GSM/CDMA cell. You can expect more accurate location information on a mobile device, especially if GPS is enabled.
Privacy
The specification of the Geolocation API also discusses privacy and permissions. In fact, the specification clearly states that the user's permission needs to be explicitly obtained before enabling the API.
What this means is that the browser is required to display a notification to the user asking for their permission. An example of such a message is shown below (Google Maps).
API
The API exposes three methods that belong to the window.navigator.geolocation object. The methods provided are:
getCurrentPosition
watchPosition
clearWatch
Detecting Support
As with many other APIs, detecting whether the device's browser supports the Geolocation API is easy. Take a look at the following snippet in which we detect support for the Geolocation API.
if (window.navigator && window.navigator.geolocation) {
// I can watch you wherever you go...
} else {
// Not supported
}
Locating the Device
To detect the device's location, we call getCurrentPosition or watchPosition, depending on our needs. Both methods perform the same task with only a few minor differences.
To obtain the device's location, getCurrentPosition and watchPosition make an asynchronous request. The difference between these methods is that getCurrentPosition performs a one-time request, while watchPosition monitors the device's location and notifies the application whenever a location change takes place.
The Geolocation API is smart enough to only invoke the success callback of watchPosition—invoked when the position is obtained—if the user's location changes.
Another important difference between getCurrentPosition and watchPosition is the return value of each method. The getCurrentPosition method returns nothing, while watchPosition returns an identifier that can be used to stop the API from monitoring the device's location through the clearWatch function.
The signatures of getCurrentPosition and watchPosition look like this:
// Get Current Position
getCurrentPosition(successCallback [, errorCallback [, options]])
// Watch Position
watchPosition(successCallback [, errorCallback [, options]])
As the signatures indicate, each function accepts three parameters. Only the first argument, the success callback function, is required. Let's take a closer look at each argument.
successCallback: This callback function is executed after successfully obtaining the user's location. The callback accepts a position object that contains the device's location information.
errorCallback: The error callback is executed when an error is encountered. The error callback accepts an error object, containing information about the type of error that occurred.
options: The options object gives the developer the ability to configure the asynchronous request.
Take a look at the following snippets to see how you can use getCurrentPosition and watchPosition to obtain the device's location.
var geolocation = null;
if (window.navigator && window.navigator.geolocation) {
geolocation = window.navigator.geolocation;
}
if (geolocation) {
geolocation.getCurrentPosition(function(position) {
console.log(position);
});
var identifier = geolocation.watchPosition(function(position) {
console.log(position);
});
console.log(identifier);
}
Stop Monitoring Location
In the previous section, I mentioned the clearWatch function. This function lets you stop monitoring the device's location, initiated by invoking watchPosition.
The clearWatch function accepts one required argument, the identifier returned to us after invoking watchPosition.
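Continuing the earlier example, stopping the monitoring takes a single line:
// Stop receiving updates for the watch we started earlier
geolocation.clearWatch(identifier);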
Now that we've covered the technical details of the Geolocation API, it's time to explore the position, error, and options objects returned by the Geolocation API.
Location Information
Position
The methods exposed by the Geolocation API accept or return three types of objects. The first object that's of interest to us is the position object, which contains the location information that we're interested in. Take a look at the following table to get an idea of the information it contains.
The position object that's returned from the success callbacks of getCurrentPosition and watchPosition contains a timestamp and coords property. The coords property is an object containing the location's latitude, longitude, altitude, accuracy, altitudeAccuracy, heading, and speed.
Most desktop browsers will not return a value for the altitude, altitudeAccuracy, heading, and speed properties. Mobile devices, however, such as smartphones and tablets, will provide more accurate information thanks to the presence of a GPS chip or other hardware that helps detect the device's location.
The timestamp property holds the time the location was detected, which can be useful if you need to know how fresh the data is that was returned.
PositionError
The error object of the error callback, the optional second argument of getCurrentPosition and watchPosition, has a code and a message property.
The message property briefly describes the type of error. The code property can have one of four values:
0: The request failed, but the reason is not known.
1: The request failed because the user didn't give permission to use the device's location.
2: The request failed as a result of a network error.
3: The request failed because it took too long to resolve the device's position.
PositionOptions
The optional third argument of getCurrentPosition and watchPosition is a PositionOptions object, enabling the developer to customize the asynchronous request.
The PositionOptions object currently supports three options:
enableHighAccuracy: If the value is set to true, the web page or application indicates that it wants the best possible—most accurate—result. This may result in a slower response time or increased power consumption. The default value is false.
timeout: This property specifies the maximum number of milliseconds after which the request times out. The default value is Infinity.
maximumAge: When a location request is successful, the browser caches the result for later use. The maximumAge property specifies the time after which the cached result must be invalidated. The default value is 0, which means results are never served from the cache.
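For example, here's how we could request a high-accuracy position that times out after ten seconds and accepts a cached result of up to one minute old. The successCallback and errorCallback functions are the callbacks described earlier.
window.navigator.geolocation.getCurrentPosition(successCallback, errorCallback, {
    enableHighAccuracy: true, // ask for the most accurate result available
    timeout: 10000,           // give up after ten seconds
    maximumAge: 60000         // accept a cached position up to one minute old
});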
Browser Support
Support for the Geolocation API is really good. This is true for desktop and mobile browsers. Take a look at this summary to get an idea of which desktop browsers support the Geolocation API:
Firefox 3.5+
Chrome 5.0+
Safari 5.0+
Opera 10.60+
Internet Explorer 9.0+
Support in mobile browsers is even better as you can see in this summary:
Android 2.0+
iPhone 3.0+
Opera Mobile 10.1+
Symbian (S60 third and fifth generation)
Blackberry OS 6
The Geolocation API is widely supported. You may be wondering what you can do if you encounter a browser that doesn't support the Geolocation API. Believe it or not, several polyfills and shims exist to remedy that issue. The most notable solutions are the one created by Manuel Bieh and a lightweight shim created by Paul Irish.
Demo
Now that we know the ins and outs of the Geolocation API, it's time to see it in action. This demo is fully functional and uses all the methods and objects described in this article. The purpose is simple: every time a request to detect the device's position is performed, the location data is shown to the user in list format.
The demo contains three buttons, allowing you to select the operation you want to perform. The demo also detects if the browser supports the Geolocation API or not. If it doesn't, you'll see the message "API not supported" and the buttons are disabled.
In this article, we've learned about the Geolocation API. We've seen what it is, how to use it, and when to use it.
The Geolocation API is a useful tool to improve the user experience and can serve many purposes. Support is broad, but don't forget that older versions of Internet Explorer don't support it.
As we discussed in this article, you should be aware that some location data, such as speed, altitude, and heading, isn't always available. Also remember to use the Geolocation API with care, because it can consume significant battery power, especially on mobile devices equipped with a GPS chip.
New Relic has gained name and fame for being the number one solution for monitoring application performance. It tells you what you need to know about your applications to improve performance by reducing response time and increasing application throughput. It helps you track down bottlenecks and monitor your server infrastructure.
However, you're reading this article because you're interested in mobile. Don't worry, New Relic has you covered too. New Relic Mobile lets you monitor the performance of iOS and Android applications. Alan recently wrote about New Relic for Android, so I suggest you check out his tutorial if you're interested in Android.
In this tutorial, I will show you how to integrate New Relic in an iOS application. You will learn how easy it is to set up New Relic and what it can do for your iOS application in terms of performance and making sure your users get the best possible experience using your product.
Is It Necessary?
If you think application performance monitoring is only useful if you maintain a large-scale web application like Facebook or Twitter, then you're in for a surprise. Monitoring application performance is always useful if you care about the user experience of your product and its users.
Use Cases
There are several reasons why performance monitoring is vital to the success of your application. No matter how often you talk to the users of your application or how large your group of testers is, you don't know how every one of your users is using your application and what problems they run into.
Not too long ago, I developed and maintained an iPad application that integrated with Aperture and iPhoto. Even though the concept was pretty straightforward, I was often baffled by the way people were using my application. Believe me when I say that your application will be used in ways you didn't anticipate or even think of. This is fine and perfectly normal, but make sure you have a solution in place that tells you what you need to know about your application's health and performance so you can optimize for use cases that you didn't consider during development.
Network Issues
Another common misconception is that mobile devices are lightning fast and everyone has access to a blazing fast LTE connection. I'm afraid the truth is less rosy. New Relic lets you monitor the API requests your application makes, how long they take to complete, and how this impacts your application's user experience.
If your application fetches data from an API and that request takes several seconds to complete, then your users might ditch your application the second or third time they use it. People don't like to wait and they expect everything to be fast.
As David Smith recently pointed out in An Unexpected Botnet, your application can sometimes show unpredictable behavior, no matter how well you know the code base and the system frameworks your application interacts with. Don't wait for your users to report problems to you or, even worse, for them to look for an alternative without even telling you about the problem that made them switch.
1. Creating an Application in New Relic
Getting started with New Relic is free. Head over to New Relic's website and create an account so you can follow along. In your New Relic account, select the Mobile tab on the left and choose iOS from the list of platforms.
Give your application a name and click Continue to get started integrating New Relic in your iOS application.
2. Installing the New Relic SDK
The next step is integrating the New Relic SDK into your iOS application. To give you a head start, I've created a sample application that you can use, which you can find in the source files of this tutorial. The sample application is a simple weather client I created for another tutorial. It's a great fit for New Relic Mobile.
You have two options to install the New Relic SDK, manually or through CocoaPods. Because the sample application already uses CocoaPods, I'll be using CocoaPods to install the New Relic SDK.
Open the project's Podfile at the root of the project and update the list of dependencies as shown below.
platform :ios, '6.0'
pod 'ViewDeck', '~> 2.2.11'
pod 'AFNetworking', '~> 1.2.1'
pod 'SVProgressHUD', '~> 0.9.0'
pod 'NewRelicAgent', '~> 3.289'
To install the New Relic SDK, open a terminal window, navigate to the location of the project's Podfile, and run pod update. The beauty of CocoaPods is that it also links the project against the necessary frameworks and libraries. The New Relic SDK depends on the Core Telephony and System Configuration frameworks as well as the libz library. If you're using CocoaPods, you don't have to worry about this.
Build the project to verify everything is working as expected and no errors are thrown by the compiler.
If you're new to CocoaPods, then take a few minutes to read my tutorial on CocoaPods. CocoaPods has become the de facto dependency management tool for iOS and OS X development.
3. Integrating New Relic
Step 1
Once you've installed the New Relic SDK, integrating New Relic is easy as pie. Open your project's precompiled header file and add the following import statement.
#import <NewRelicAgent/NewRelic.h>
The precompiled header file is located in Supporting Files and ends in -Prefix.pch. The precompiled header file of the sample application, for example, is named Rain-Prefix.pch.
Step 2
To set New Relic up, open your application's application delegate and add the following snippet to application:didFinishLaunchingWithOptions:. Make sure to pass your own application token as the argument of startWithApplicationToken:.
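// Replace the placeholder with your own application token.
[NewRelicAgent startWithApplicationToken:@"YOUR_APPLICATION_TOKEN"];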
You can find your application's token in the New Relic dashboard.
4. Run the Application
The sample application uses Forecast to fetch weather data, so replace the API key in MTConstants.m with your own API key. You can create a free Forecast account on the Forecast website.
#pragma mark -
#pragma mark Forecast API
NSString * const MTForecastAPIKey = @"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX";
5. Exploring the New Relic Dashboard
Overview
Once you've successfully set up New Relic for your iOS application, it will automatically start sending data to New Relic's servers. The amount of data the SDK collects for you is astounding and the level of detail may even be a bit overwhelming. Let's take a moment to see what data New Relic has collected for our weather application.
Sign in to your New Relic account and select the Mobile tab on the left to see a list of the mobile applications New Relic is monitoring for you. This list immediately gives you an idea of the status of your application by showing you the number of active sessions, network performance, and possible problems New Relic has detected. Click Rain to further explore the data New Relic is collecting for us.
The Overview page shows you a high-level picture of how your application is performing. It shows you a number of key performance statistics, such as execution times of various operations, including the loading of views and execution of HTTP requests.
You are looking at live data, but you can adjust the timescale at the top right of the page to browse historical data.
Even though network performance is important for most mobile applications, the performance data New Relic collects isn't limited to that. If you open the Interactions tab at the top, you'll see how long certain interactions take and, more importantly, New Relic conveniently shows you which interactions are slowest.
I've tested Rain on an old iPhone 3GS running iOS 6.1.3, and it shouldn't surprise you that New Relic effortlessly shows us that our application performs slowly on this device and on iOS 6.
Of course, it is up to you to decide how you use the data New Relic collects for you. The iPhone 3GS was introduced in 2009 and iOS 7 has surpassed an 80% market share, so it may not be worthwhile tweaking your application to improve performance on an iPhone 3GS running iOS 6. However, it's important to understand that this too is valuable information, as it allows you to make appropriate decisions in terms of development and focus.
In addition to collecting data about application performance, New Relic also collects information about application usage, such as the number of active users, device information, etc. New Relic is much more than an application performance monitoring solution.
Network
New Relic is a great solution for monitoring the performance of network operations on mobile. It shows you exactly which requests your application makes, how long they take to complete, and if any errors pop up.
This may not seem useful if you're not running your own backend, but it does help you decide which requests are sent at what moment to make your application snappier and more responsive. Developers often wrongly assume that nothing can be done if an application relies on a third party for its data. This simply isn't true, and New Relic helps you avoid such problems.
Alerts
Alerts are one of the most powerful and useful features of New Relic. In the Settings tab, you can set one or more custom alerts, which is especially useful if your application connects with a backend that you maintain and control.
In the next example, I've created an alert to notify me when requests to the Forecast API are becoming very slow, taking more than five seconds to complete.
Conclusion
As a developer, you always need to keep in mind that you don't know how your application is being used and under what circumstances. This implies that you cannot predict how your application behaves for every user.
New Relic is a valuable service for every iOS application that has some complexity to it. People download lots and lots of applications every day, which means they don't hesitate to ditch your application for the next best thing. It's therefore key to make sure your application performs well so your users stay happy. New Relic helps you with this.
If you've ever worked with KVO (Key-Value Observing) in Cocoa, chances are that you've run into various kinds of issues. The API isn't great and forgetting to remove an observer may result in memory leaks or—even worse—crashes. Facebook's KVOController library aims to solve this problem.
What Is Key-Value Observing?
If you're new to key-value observing or KVO, I recommend that you first read Apple's developer guide on the topic or Mattt Thompson's article on NSHipster. To quote Apple's guide on KVO, "Key-value observing provides a mechanism that allows objects to be notified of changes to specific properties of other objects." Mattt defines KVO as follows, "Key-Value Observing allows for ad-hoc, evented introspection between specific object instances by listening for changes on a particular key path." The keywords are evented and key path.
Before we discuss the KVOController library, I'd like to take a moment to explore the KVO API.
Adding an Observer
I won't cover KVO in great detail in this tutorial, but it's important that you understand the core concept of KVO. The idea is simple. An object can listen for changes to specific properties of another object. The observing object is registered with the target object as an observer for a specific key path.
Let's illustrate this with an example. If objectB wishes to be notified when the name property of objectA changes, then objectA needs to add objectB as an observer for the key path name. Thanks to Objective-C's verbosity, the code to accomplish this is pretty simple.
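In its simplest form, it looks something like this:
[objectA addObserver:objectB
          forKeyPath:@"name"
             options:NSKeyValueObservingOptionNew
             context:NULL];
The observer then implements observeValueForKeyPath:ofObject:change:context: to receive change notifications:
- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context
{
    if ([keyPath isEqualToString:@"name"]) {
        // Respond to changes of the name property
    }
}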
Whenever objectA's name property changes, observeValueForKeyPath:ofObject:change:context: is invoked. The first parameter is the key path that's being observed by objectB, the second parameter is the object of the key path, the third argument is a dictionary describing the changes, and the final argument is the context that was passed as the final argument of addObserver:forKeyPath:options:context:.
I hope you agree that this isn't very elegant. If you're making extensive use of KVO, the implementation of observeValueForKeyPath:ofObject:change:context: quickly becomes long and complex.
Removing an Observer
It's important to stop listening for changes when an object is no longer interested in receiving notifications for a specific key path. This is done by invoking removeObserver:forKeyPath: or removeObserver:forKeyPath:context:.
The issue that every developer runs into at some point is either forgetting to call removeObserver:forKeyPath: or calling removeObserver:forKeyPath: with a key path that isn't being observed by the observer. The reasons for this are manifold and are the root of the problem many developers face when working with KVO.
If you forget to invoke removeObserver:forKeyPath:, you may end up with a memory leak. If you invoke removeObserver:forKeyPath: and the object isn't registered as an observer, an exception is thrown. The cherry on the cake is that the NSKeyValueObserving protocol doesn't provide a way to check whether an object is observing a particular key path.
KVOController to the Rescue
Luckily, the Cocoa team at Facebook was just as annoyed by the above issues as you are and they came up with a solution, the KVOController library. Instead of reinventing the wheel, the team at Facebook decided to build on top of KVO. Despite its shortcomings, KVO is robust, widely supported, and very useful.
The KVOController library adds a number of things to KVO:
thread-safety
painless removal of observers
support for blocks and custom actions
Requirements
Before we get started, it's important to stress that the KVOController library requires ARC and that the minimum deployment targets are iOS 6 for iOS and 10.7 for OS X.
Installation
I'm a big proponent of CocoaPods and I hope you are too. To add the KVOController library to a project using CocoaPods, add the pod to your project's Podfile and run pod update to install the library.
pod 'KVOController'
Alternatively, you can download the latest version of the library from GitHub and manually add the library by copying KVOController.h and KVOController.m to your project.
Examples
Initialization
The first thing you need to do is initialize an instance of the FBKVOController class. Take a look at the following code snippet in which I create a FBKVOController instance in a view controller's initWithNibName:bundle: method.
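- (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
    if (self) {
        // Create a KVO controller with the view controller as the observer
        _KVOController = [FBKVOController controllerWithObserver:self];
    }
    return self;
}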
Note that I store a reference to the FBKVOController object in the view controller's _KVOController instance variable. A great feature of the KVOController library is that the observer is automatically removed the moment the FBKVOController object is deallocated. In other words, there's no need to remember to remove the observer yourself.
Adding an Observer
You have several options to start observing an object. You can take the traditional approach by invoking observe:keyPath:options:context:. The result is that observeValueForKeyPath:ofObject:change:context: is invoked whenever a change event takes place.
However, the FBKVOController class also leverages blocks and custom actions as you can see in the following code snippets. I'm sure you agree that this makes working with KVO much more enjoyable.
[_KVOController observe:person keyPath:@"name" options:NSKeyValueObservingOptionNew block:^(id observer, id object, NSDictionary *change) {
// Respond to Changes
}];
Even though the observer is automatically removed when the FBKVOController object is deallocated, it's often necessary to stop observing an object before the observer is deallocated. The KVOController library has a number of methods to accomplish this simple task.
To stop observing a specific key path of an object, invoke unobserve:keyPath: and pass the object and key path. You can also stop observing an object by invoking unobserve: and pass in the object you want to stop observing. To stop observing every object, you can send the FBKVOController object a message of unobserveAll.
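In code, that looks like this:
// Stop observing the name key path of person
[_KVOController unobserve:person keyPath:@"name"];
// Stop observing person altogether
[_KVOController unobserve:person];
// Stop observing every object
[_KVOController unobserveAll];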
No Exceptions
If you take a look at the implementation of the FBKVOController class, you'll notice that it keeps an internal map of the objects and key paths the observer is observing. The FBKVOController class is more forgiving than Apple's implementation of KVO. If you tell the FBKVOController object to stop observing an object or key path that it isn't observing, no exception is thrown. That's how it should be.
Conclusion
Even though KVO isn't a difficult concept to grasp, making sure observers are properly removed and race conditions don't cause mayhem is the real challenge when working with KVO.
I encourage you to take a look at the KVOController library. However, I also advise you to get a good understanding of KVO before you use it in your projects so you know what this library is doing for you behind the scenes.
Most established mobile platforms have a set of design patterns, written or unwritten guidelines of how things should look, feel, and function. Applying proven design patterns improves the usability of your product, increases conversion, and provides a feeling of familiarity to users. Ignoring standards will confuse and frustrate users and is something every designer should try to avoid as much as possible.
In this article, we take a look at design patterns on iOS by showing you a number of examples that illustrate how existing applications apply some of these design patterns.
What Are Design Patterns?
In short, a design pattern is a recurring solution that solves a specific design problem. Because it's recurring and users come across it often, they quickly become familiar with the solution the pattern provides.
The hamburger icon, for example, has become a well-known design pattern. We all know that it will open a menu when the icon is tapped. This behavior is so ingrained that it would confuse the user if tapping the icon resulted in a different action.
Whenever designers don't follow design patterns, and instead choose to implement their own solution, two outcomes are possible:
The user is annoyed or frustrated, because they don't understand what the design or interface is trying to tell them or because they were expecting a different result.
The user is delighted, because the new solution is an improvement over the existing one. We often consider this innovative design, because it may be a game-changer, replacing existing design patterns.
Be careful, though, because the line between a frustrating experience and an innovative design is often thinner than you expect it to be.
With that in mind, let's focus on iOS and see how design patterns apply to Apple's mobile platform.
Apple's Guidelines
To cultivate consistent design standards for the iOS platform, Apple provides a document known as the Human Interface Guidelines or HIG. It defines standards to which developers and designers need to adhere. Examples include the standard keyboard layout, the date picker, and the status bar.
Design Vision
However, design standards aren't limited to using consistent user interface elements. Along with the release of iOS 7, Apple also outlined their new vision regarding design, which embodies three major themes as outlined in Apple's iOS Human Interface Guidelines:
Deference. The user interface helps users understand and interact with the content, but never competes with it.
Clarity. Text is legible at every size, icons are precise and lucid, adornments are subtle and appropriate, and a sharpened focus on functionality motivates the design.
Depth. Visual layers and realistic motion impart vitality and heighten users' delight and understanding.
Look and Feel
The biggest change in iOS 7 was the way we visually present elements. Flat design was introduced to iOS users, which was a major change. Many people felt it wasn't necessarily an improvement.
Funnily enough, looking back at iOS 6, the general opinion is that skeuomorphic design is outdated. Our perceptions have clearly shifted.
As people get used to the flat design of iOS 7, they grow accustomed to a particular visual style. To put it differently, as a developer, you should preferably stick to the visual style of iOS 7, because that's what users have come to expect from iOS.
Of course, it isn't only about the look of your application. How it behaves and feels is also an important aspect to consider. Subtle animations have become a trademark of iOS 7. This has as much impact on the look and feel of your application as the visuals do.
The animations you use in your application matter and are part of design patterns. Users sense and appreciate refined animations, which means it's worth putting effort into them.
We use a lot of iconography during the design process of an application. Icons are an important tool for interface design patterns as they have a global meaning, regardless of the user's context.
Using icons correctly is a great start for applying design patterns, but the look and feel of these icons is also crucial. We've become familiar with flat and simple iconography. Very detailed iconography fails to meet the user's expectations and, as a result, breaks the effectiveness of the design.
Elements Supporting Design Patterns
One of the major new design patterns is the use of translucent user interface elements. The revamped control center is a good example. Apple uses translucency and blur to make the user aware of the content in the background. It helps to give the user context as they use the control center.
The use of negative space also helps make designs more efficient and usable. It's one of the key components that makes iOS 7 so different from iOS 6. Combined with a limited set of key colors, this gives you the essential ingredients for well-thought-out user interface design. As designers and developers, we are forced to think more about the design choices we make—even the small ones.
A major, and perhaps controversial, change has been the switch to borderless buttons, a critical change in iOS design patterns. It's perhaps also one of the reasons why iOS 7 initially received a lot of criticism. It's a more extreme approach to flat design. That change perfectly illustrates the fine line between innovative design and design that results in frustration.
And then of course, for user interface elements, there are the nitty-gritty details you have to consider. Toolbar and navigation bar icons, for example, should have a tappable area of at least 44x44 points. For tab bar icons, 50x50 points is recommended. The maximum number of icons in a tab bar is five for the iPhone and iPod Touch. A complete list of recommended sizes of various user interface elements can be found in the Human Interface Guidelines.
The same applies to gestures. Using obscure or difficult-to-guess gestures for common actions leads to confused users. Using a pinch gesture to open a link seems like a pretty bad idea. Right?
Another major focal point of iOS 7 is typography. Apple encourages the use of a single, dynamic font instead of multiple fonts.
There's also a clear vision with regards to application branding. Even though we've become used to explicit branding in applications, Apple now prefers brands that are less explicit in the way they brand and promote themselves. In other words, the design or user interface should be the focus—not the brand. The application's key colors and design language are perfect for promoting a brand in a non-obtrusive manner.
iPad Interface
Design patterns not only dictate best practices for design in general—they also get specific. Some devices have or require different standards. The iPad's interface is a great example.
Popovers and split view controllers, for example, are user interface elements you won't find on an iPhone or iPod Touch. These design patterns cater to larger screens like the one found on the iPad, iPad Air, and iPad Mini.
What To Remember
Prioritize and present core features first. Identify the major user stories. These should require the least amount of navigation.
Design patterns often use device-specific functionality to improve the relevance of an application and its contents. The location, for example, is often used to show relevant content to the user.
Provide navigational clues so that users always know where they are in your application.
Design patterns are often focused on optimizing the calls to action so users are repeatedly reminded of the next action they need to take. The Tumblr application illustrates this well.
User input should be as easy and simple as possible. Decrease the number of fields in a form and use default values whenever possible. Tumblr is a good example of smart defaults.
If a user interface element is tappable, then make sure this is clear for the user through the element's design.
Mobile design patterns often consist of horizontal flows rather than vertical ones. Automatically animate a new view in instead of waiting for the user to scroll down. It's important to make the experience smooth and seamless. Unlike a website, it's not necessary to confine a particular action (e.g., making a purchase) to a single view. It's often more efficient and elegant to guide the user through a series of views with a single call to action.
Finally, understand the context of mobile devices. Mobile devices are mostly used in short bursts, which is very different from how we work with a desktop or notebook.
Conclusion
Design patterns rely on common sense and practice. It's pointless to aim for innovative design when chances are that you'll end up with a frustrated user. Stick to guidelines when they're available, use established design patterns, and improve the usability of your product.
Research what solutions other applications use to solve certain problems. How do most applications handle user registration? What are tested approaches for integrating e-commerce? How is social sharing best integrated in an application? Paying attention to detail while using applications is a great way to become familiar with various design patterns.
One of the best places to get started with iOS development is our series on iOS development, Learn iOS SDK Development From Scratch. I'm excited to announce that the series has been updated for iOS 7 and Xcode 5.
The articles in this series have been updated to use Xcode 5 and storyboards. If you're considering to get your feet wet with iOS development, then Learn iOS SDK Development From Scratch is a good place to start.
During the series, you create a number of projects to really understand the concepts and patterns that are covered in the tutorials. You finish the series by building a shopping list application to put everything you learned into practice.
Start learning iOS development by reading this awesome series and let it guide you along the path of becoming a great iOS developer.
Testing an app on Android or iOS isn't all that different. The purpose is the same, the desired outcome is the same, and the process is similar. The major difference comes when we begin to look at the details. That's what I plan to do in this article.
1. Fundamentals
Before we dive in, let’s talk about some testing fundamentals. It’s impossible to review our options unless we know and understand the complete picture.
Challenges on Android
What makes Android stand out is the myriad of possibilities. On iOS, there's the iPhone, iPad, and iPod Touch. They're different, but iOS devices have a lot in common when it comes to pixel density, screen resolution, processor speed, and memory size.
In the case of Android, there are thousands of combinations when we look at those same factors, screen resolution and size, processor speeds, memory size, and, the cherry on the cake, the fragmentation of the operating system.
Speaking of operating system versions, it's not uncommon for carriers and phone manufacturers to stop pushing out updates not too long after the product's release. Is this a problem? Yes. Take a look at Google’s official Android market share statistics to get an idea of the problem's scale.
In descending order of market share, we have Jelly Bean (4.1-4.3), Gingerbread (2.3), and Ice Cream Sandwich (4.0).
Compare that with the adoption rate of Apple's iOS 7. At the end of January of this year, 80% of iOS devices were running iOS 7. Mind you that iOS 7 was released in September of 2013. That's a major difference.
Study, Contrast & Compare
Have you ever used a really bad Android application? Worse than an outright bad application is a really great one that has a few persistent bugs.
I feel a large factor in good testing is paying attention to what you use, like, and hate. Hate is a strong word, but I’m sure there's something that always stands out.
Ask yourself the following questions:
What are some of my favorite apps? Why is that?
What are some bad apps that I use?
What makes an app great? Is it attention to detail?
Does that bad app freeze from time to time? Does it constantly crash? Or is its design no good?
Know What You're Working With
Let's reference that Android OS market share chart that we saw earlier. It's unrealistic to approach testing thinking that you'll support every device and every flavor of Android.
My point is that we need to think about the distribution. What does our app do and what does the target market look like? Is it a game or a utility application?
If it’s a game, the focus may likely be only newer and higher-end devices. A utility application, however, could be used by a broader audience and needs to function on a wider array of devices.
2. Approach
A problem I feel most of us run into is that we're too close to our projects. We know how we can make our app fail and how to get it to work. For this reason, I intentionally try to put myself in the mind of a user. I put users in two broad categories: the Button Masher and the User.
Button Masher
The Button Masher is the type of user that will just start tapping the screen, a button here, a button there. "That last button didn't do anything. I'll hit another one."
What we can learn from this user type is where we have intensive processes within our app. If something is happening and another request or action occurs, do we spike the processor or fill up the device's memory? Does it cause the application to crash?
The other question that surfaces is "How well do we inform the user of what's going on?" Why did they hit another button instead of waiting? Can we remedy this by showing a loading screen?
User
The User has intention. A better way to explain this type of user is to look at use cases. There's a specific task they want to accomplish and a specific flow they'll try to follow.
We can learn how clear the app is at guiding a person through a process or action. It will show us where a user gets lost and what areas require more attention or refining.
We've talked through our purpose and different user types, but what are our options and how should we test them? Thankfully, there are a lot of options, and I recommend using as many as you possibly can.
3. Options
Phone a Friend
If you don't have the luxury of being able to walk down to the QA department or a testing lab, you can call a friend. You need eyes and you need devices.
When it comes to testing mobile apps, volume can make a difference, especially if you can get a variety of devices.
Tools & Unit Testing
Automated testing is your friend. While nothing will beat hands-on time with a complete application, it's also important to see what's going on under the hood and how your app will react programmatically to certain situations or when put under stress.
More importantly, unit testing allows you to test as you go, which can save a lot of time during testing and QA, prior to release.
The Android JUnit extension allows developers to write unit tests against Android components and the Android API with pre-built component-specific test case classes.
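For instance, a bare-bones instrumentation test could look like the sketch below. MainActivity is a placeholder for one of your own activities.
import android.test.ActivityInstrumentationTestCase2;
public class MainActivityTest extends ActivityInstrumentationTestCase2<MainActivity> {
    public MainActivityTest() {
        super(MainActivity.class);
    }
    public void testActivityLaunches() {
        // getActivity launches the activity under test
        assertNotNull(getActivity());
    }
}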
Monkeyrunner is a Python-based API that allows you to write programs that control a device from a user's perspective. This means that you can create tests to run on numerous devices or emulators that will interact with your app, sending it keystrokes and capturing screenshots.
Other Testing Frameworks
There are a lot of testing frameworks available. A few of the more popular ones are Robolectric and Robotium.
Robolectric is a unit test framework that runs in your IDE. This is great for auditing your code pre-build. Robotium runs tests against the Android API in emulators. While it takes more time to complete the tests, your application code will be put through much more of a real-world test against devices and the API.
One other interesting option is Espresso. It serves a very specific purpose when compared to the previous two options. It's an API to run tests against the Android UI.
All of the above options are great, but if you're building a hybrid app, you may not be able to use them. Appium is a cross-platform automation framework that allows you to build tests with whatever language you like for both major mobile platforms.
Reporting and Analytics
It's also really helpful to be able to see some statistics and, more importantly, collect error and crash logs. This is particularly useful if you have many people testing your application, because it can become cumbersome to collect logs from every single user.
Aside from tracking app usage, Google Analytics can also send you exceptions. Flurry is another tested option. They've been around for a long time and their reporting and crash reports are a bit more detailed.
While it doesn't help during the development phase of your application, Google also collects crash reports for apps in the Play Store.
4. Third Party Options
We'd all like to have 400 devices to test on, like those massive testing labs we've seen online. However, that isn't realistic. To address this problem, there are many services available if you're willing to invest in testing.
These services range from one-on-one real-human tests to fully automated testing on hundreds of devices. If you're willing to pay for it, it's available.
I don't have experience with most of these, but the one I have used is User Testing. It's very helpful to see a person follow your test script as they tap through your application and give you their thoughts.
Too many times I've come across the situation where QA and testing seemed like an afterthought. In reality, it's probably the most important part of the development process.
Android, with its many flavors, may seem intimidating, but when approached almost programmatically, it really just becomes part of the process. It's worth the extra time and effort. Quality apps don’t just happen.
In this tutorial, I will show you how to use the Dolby Audio API to enhance the sound in your Android applications. To show you how to use the Dolby Audio API, we will create a simple yet fun puzzle game.
Introduction
In today's crowded mobile market, it's important to make your applications as compelling as they can possibly be. Enhancing an application's audio experience can be as appealing to the user as having a stunning user interface.
The sound created by an application is a form of interaction between the user and the application that is all too often overlooked. This also means that offering a great audio experience can help your application stand out from the crowd.
Dolby Digital Plus is an advanced audio codec that can be used in mobile applications using Dolby's easy-to-use Audio API. Dolby has made its API available for several platforms, including Android and Kindle Fire. In this tutorial, we will look at the Android implementation of the API.
The Dolby Audio API for Android is compatible with a wide range of Android devices. This means that your Android applications and games can enjoy high-fidelity, immersive audio with only a few minutes of work integrating Dolby's Audio API. Let's explore what it takes to integrate the API by creating a puzzle game.
1. Overview
In the first part of this tutorial, I will show you how to create a fun puzzle game. Because the focus of this tutorial is on integrating the Dolby Audio API, I won't be going into too much detail and I expect you're already familiar with the basics of Android development. In the second part of this article, we'll zoom in on the integration of the Dolby Audio API in an Android application.
We're going to make a traditional puzzle game for Android. The goal of the game is to slide a puzzle piece into the empty slot of the puzzle board to move the puzzle pieces around. The player needs to repeat this process until every piece of the puzzle is in the correct order. As you can see in the screenshot below, I've added a number to each puzzle piece. This will make it easier to keep track of the pieces of the puzzle and the order they're in.
To make the game more appealing, I'll show you how to use custom images as well as how to take a photo to create your own unique puzzles. We'll also add a shuffle button to rearrange the pieces of the puzzle to start a new game.
2. Getting Started
Step 1
It's not important what IDE you use, but for this tutorial I'll be using JetBrains IntelliJ Idea. Open your IDE of choice and create a new project for your Android application. Make sure to create a main Activity class and an XML layout.
Step 2
Let's first configure the application's manifest file. In the application node of the manifest file, set hardwareAccelerated to true. This will increase your application's rendering performance even for 2D games like the one we're about to create.
android:hardwareAccelerated="true"
In the next step, we specify the screen sizes our application will support. For games, I usually focus on devices with larger screens, but this choice is entirely up to you.
In the activity node of the manifest file, add an attribute named configChanges and set its value to orientation as shown below. You can find more information about this setting on the Android developer website.
android:configChanges="orientation"
Before we move on, add two uses-permission nodes to enable vibration and write access for our game. Insert the following snippet before the application node in the manifest file.
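<uses-permission android:name="android.permission.VIBRATE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />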
Let's also add the resources that we'll be using later in this tutorial. Start by adding the image that you want to use for the puzzle. Add it to the drawable folder of your project. I've chosen to add the image to the drawable-hdpi folder of my project.
Last but not least, add the sound files that you want to use in your game. In your project's res folder, create a new directory named raw and add the sound files to this folder. For the purpose of this tutorial, I've added two audio files. The first sound is played when the player moves a puzzle piece while the second sound is played when the game is finished, that is, when the player completes the puzzle. Both sounds are available on SoundBible. The first sound is licensed under the Creative Commons Attribution 3.0 license and was recorded by Mike Koenig.
3. Creating the Brain of the Game
As I mentioned earlier, I won't explain the game creation process in detail since the focus of this tutorial is integrating the Dolby Audio API. In the next steps, I will walk you through the steps you need to take to create the puzzle game.
We start by creating a new class, SlidePuzzle, that will be the brain of the game. Every move made in the puzzle is processed and tracked by an instance of this class using simple math.
It's an important part of the game as it will determine which tiles can be moved and in what direction. The class will also notify us when the puzzle is completed.
package com.dolby.DolbyPuzzle;
import java.util.Random;
public class SlidePuzzle {
}
We'll start by declaring a number of variables that we'll need a bit later. Take a look at the next code snippet in which I declare variables for the four possible directions the puzzle pieces can move in, two arrays of integers for the horizontal and vertical directions, and an array for the tiles of the puzzle. We also declare and create an instance of the Random class, which we'll use later in this tutorial.
public static final int DIRECTION_LEFT = 0;
public static final int DIRECTION_UP = 1;
public static final int DIRECTION_RIGHT = 2;
public static final int DIRECTION_DOWN = 3;
public static final int[] DIRECTION_X = {-1, 0, +1, 0};
public static final int[] DIRECTION_Y = {0, -1, 0, +1};
private int[] tiles;
private int handleLocation;
private Random random = new Random();
private int width;
private int height;
The next step is to create an init method for the SlidePuzzle class. The init method accepts two arguments that determine the width and height of the SlidePuzzle object. Using the width and height instance variables, we instantiate the tiles array and set handleLocation as shown below.
public void init(int width, int height) {
this.width = width;
this.height = height;
tiles = new int[width * height];
for(int i = 0; i < tiles.length; i++)
{
tiles[i] = i;
}
handleLocation = tiles.length - 1;
}
The SlidePuzzle class also needs a setter and getter method for the tiles property. Their implementations aren't that complicated as you can see below.
public void setTiles(int[] tiles) {
this.tiles = tiles;
for(int i = 0; i < tiles.length; i++)
{
if(tiles[i] == tiles.length - 1)
{
handleLocation = i;
break;
}
}
}
public int[] getTiles() {
return tiles;
}
In addition to the accessors for the tiles property, I've also created a handful of convenience methods that will come in handy later in this tutorial. The getColumnAt and getRowAt methods, for example, return the column and row of a particular location in the puzzle.
public int getColumnAt(int location) {
return location % width;
}
public int getRowAt(int location) {
return location / width;
}
public int getWidth() {
return width;
}
public int getHeight() {
return height;
}
The distance method, another helper we'll use in a few moments, sums how far each tile currently is from its solved position. A distance of zero means the puzzle is solved.
public int distance() {
int dist = 0;
for(int i = 0; i < tiles.length; i++)
{
dist += Math.abs(i - tiles[i]);
}
return dist;
}
The next method is getPossibleMoves, which we'll use to determine the possible positions the puzzle pieces can move to. In the following screenshot, there are four puzzle pieces that can be moved to the empty slot of the puzzle board. The pieces the player can move are 5, 2, 8 and 4. Didn't I tell you the numbers would come in handy?
The implementation of getPossibleMoves might seem daunting at first, but it's nothing more than basic math.
public int getPossibleMoves() {
int x = getColumnAt(handleLocation);
int y = getRowAt(handleLocation);
boolean left = x > 0;
boolean right = x < width - 1;
boolean up = y > 0;
boolean down = y < height - 1;
return (left ? 1 << DIRECTION_LEFT : 0) |
(right ? 1 << DIRECTION_RIGHT : 0) |
(up ? 1 << DIRECTION_UP : 0) |
(down ? 1 << DIRECTION_DOWN : 0);
}
In the pickRandomMove method, we use the Random object we created earlier. As its name indicates, the pickRandomMove method moves a random piece of the puzzle. The Random object is used to generate a random integer, which is returned by the pickRandomMove method. The method also accepts one argument, an integer, which is the location we ignore, that is, the empty slot of the puzzle board.
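The full listing isn't shown here, but a minimal sketch might look like the one below. I'm assuming the argument is a direction to exclude, typically the reverse of the previous move, so that a shuffle doesn't immediately undo itself; the method returns the randomly chosen direction.

public int pickRandomMove(int excludedDirection) {
    // Collect the directions that are currently possible, minus the excluded one.
    int possibleMoves = getPossibleMoves() & ~(1 << excludedDirection);
    int[] moves = new int[4];
    int numMoves = 0;
    for (int direction = 0; direction < 4; direction++) {
        if ((possibleMoves & (1 << direction)) != 0) {
            moves[numMoves++] = direction;
        }
    }
    // Pick one of the remaining directions at random.
    return moves[random.nextInt(numMoves)];
}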
The moveTile method accepts two integers, the direction to move in and the number of tiles to move. It returns true if one of the moved tiles ends up in its solved position.
public boolean moveTile(int direction, int count) {
boolean match = false;
for(int i = 0; i < count; i++)
{
int targetLocation = handleLocation + DIRECTION_X[direction] + DIRECTION_Y[direction] * width;
tiles[handleLocation] = tiles[targetLocation];
match |= tiles[handleLocation] == handleLocation;
tiles[targetLocation] = tiles.length - 1; // handle tile
handleLocation = targetLocation;
}
return match;
}
The shuffle method is used to shuffle the pieces of the puzzle when a new game begins. Take a moment to inspect its implementation as it's an important part of the game. In shuffle, we determine the limit based on the height and the width of the puzzle. As you can see, we use the distance method to determine the number of tiles that need to be moved.
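The listing is omitted here, so the following is only a sketch of how shuffle could work, with the threshold derived from the puzzle's dimensions being my own assumption:

public void shuffle() {
    // Hypothetical threshold; the higher it is, the more scrambled the board gets.
    int limit = width * height * 2;
    int previousDirection = DIRECTION_LEFT;
    while (distance() < limit) {
        // Exclude the reverse of the previous move so we don't immediately undo it.
        int direction = pickRandomMove((previousDirection + 2) % 4);
        moveTile(direction, 1);
        previousDirection = direction;
    }
}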
There are two more helper methods we need to implement, getDirection and getHandleLocation. The getDirection method returns the direction to which the puzzle piece at location is moved and getHandleLocation returns the empty slot of the puzzle board.
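Here's a hedged sketch of both helpers. getHandleLocation is a trivial getter; for getDirection I'm assuming it returns the direction a piece at location would visually slide in, or -1 when the piece doesn't share a row or column with the empty slot:

public int getHandleLocation() {
    return handleLocation;
}

public int getDirection(int location) {
    int column = getColumnAt(location);
    int row = getRowAt(location);
    int handleColumn = getColumnAt(handleLocation);
    int handleRow = getRowAt(handleLocation);
    if (row == handleRow) {
        // The piece slides horizontally toward the empty slot.
        return column < handleColumn ? DIRECTION_RIGHT : DIRECTION_LEFT;
    }
    if (column == handleColumn) {
        // The piece slides vertically toward the empty slot.
        return row < handleRow ? DIRECTION_DOWN : DIRECTION_UP;
    }
    return -1; // The piece at this location can't be moved.
}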
4. Creating the View of the Game
Create a new class and call it SlidePuzzleView. This class is the view of the puzzle board. It extends the View class and takes up the entire screen of the device. The class is responsible for drawing the puzzle pieces as well as handling touch events.
In addition to a Context object, the constructor of SlidePuzzleView also accepts an instance of the SlidePuzzle class as you can see below.
package com.dolby.DolbyPuzzle;
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Paint.Style;
import android.graphics.Rect;
import android.graphics.RectF;
import android.graphics.Typeface;
import android.view.MotionEvent;
import android.view.View;

import java.util.Set;
public class SlidePuzzleView extends View {
public SlidePuzzleView(Context context, SlidePuzzle slidePuzzle) {
super(context);
...
}
}
public static enum ShowNumbers { NONE, SOME, ALL };
private static final int FRAME_SHRINK = 1;
private static final long VIBRATE_DRAG = 5;
private static final long VIBRATE_MATCH = 50;
private static final long VIBRATE_SOLVED = 250;
private static final int COLOR_SOLVED = 0xff000000;
private static final int COLOR_ACTIVE = 0xff303030;
private Bitmap bitmap;
private Rect sourceRect;
private RectF targetRect;
private SlidePuzzle slidePuzzle;
private int targetWidth;
private int targetHeight;
private int targetOffsetX;
private int targetOffsetY;
private int puzzleWidth;
private int puzzleHeight;
private int targetColumnWidth;
private int targetRowHeight;
private int sourceColumnWidth;
private int sourceRowHeight;
private int sourceWidth;
private int sourceHeight;
private Set<Integer> dragging = null;
private int dragStartX;
private int dragStartY;
private int dragOffsetX;
private int dragOffsetY;
private int dragDirection;
private ShowNumbers showNumbers = ShowNumbers.SOME;
private Paint textPaint;
private int canvasWidth;
private int canvasHeight;
private Paint framePaint;
private boolean dragInTarget = false;
private int[] tiles;
private Paint tilePaint;
public SlidePuzzleView(Context context, SlidePuzzle slidePuzzle) {
super(context);
sourceRect = new Rect();
targetRect = new RectF();
this.slidePuzzle = slidePuzzle;
tilePaint = new Paint();
tilePaint.setAntiAlias(true);
tilePaint.setDither(true);
tilePaint.setFilterBitmap(true);
textPaint = new Paint();
textPaint.setARGB(0xff, 0xff, 0xff, 0xff);
textPaint.setAntiAlias(true);
textPaint.setTextAlign(Paint.Align.CENTER);
textPaint.setTextSize(20);
textPaint.setTypeface(Typeface.DEFAULT_BOLD);
textPaint.setShadowLayer(1, 2, 2, 0xff000000);
framePaint = new Paint();
framePaint.setARGB(0xff, 0x80, 0x80, 0x80);
framePaint.setStyle(Style.STROKE);
}
We override the class's onSizeChanged method and in this method we set puzzleWidth and puzzleHeight to 0.
@Override
protected void onSizeChanged(int w, int h, int oldw, int oldh) {
super.onSizeChanged(w, h, oldw, oldh);
puzzleWidth = puzzleHeight = 0;
}
The refreshDimensions method is invoked when the dimensions of the view change and the puzzle needs to be rebuilt. This method is invoked in the class's onDraw method.
In the onDraw method of the SlidePuzzleView class, the actual drawing of the puzzle takes place, including drawing the lines of the puzzle board. We also set the dimensions of the puzzle pieces to make sure they neatly fit the screen of the device. The view's SlidePuzzle instance helps us lay out the view, as you can see in the sketch of onDraw below.
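The tutorial's full listing is omitted here, but this simplified sketch shows the core idea: map each tile's region of the source bitmap onto its current position on the canvas, skipping the empty slot. The grid math is an assumption on my part, not the tutorial's exact code.

@Override
protected void onDraw(Canvas canvas) {
    if (bitmap == null || slidePuzzle.getTiles() == null) {
        return;
    }
    int columns = slidePuzzle.getWidth();
    int rows = slidePuzzle.getHeight();
    int[] tiles = slidePuzzle.getTiles();
    int tileWidth = getWidth() / columns;
    int tileHeight = getHeight() / rows;
    int sourceTileWidth = bitmap.getWidth() / columns;
    int sourceTileHeight = bitmap.getHeight() / rows;
    for (int location = 0; location < tiles.length; location++) {
        int tile = tiles[location];
        if (tile == tiles.length - 1) {
            continue; // Don't draw the empty slot.
        }
        // The tile's pixels in the source image.
        sourceRect.set((tile % columns) * sourceTileWidth,
                (tile / columns) * sourceTileHeight,
                (tile % columns + 1) * sourceTileWidth,
                (tile / columns + 1) * sourceTileHeight);
        // Where the tile is drawn on screen.
        targetRect.set((location % columns) * tileWidth,
                (location / columns) * tileHeight,
                (location % columns + 1) * tileWidth,
                (location / columns + 1) * tileHeight);
        canvas.drawBitmap(bitmap, sourceRect, targetRect, tilePaint);
        canvas.drawRect(targetRect, framePaint);
    }
}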
To handle touch events, we need to override the class's onTouchEvent method. To keep onTouchEvent concise and readable, I've also declared a few helper methods, finishDrag, doMove, startDrag, and updateDrag. These methods help implement the dragging behavior, as sketched below.
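The helpers themselves aren't reproduced here, but a skeleton of how onTouchEvent might dispatch to them looks like this. Note that the helper signatures are my assumption:

@Override
public boolean onTouchEvent(MotionEvent event) {
    switch (event.getAction()) {
        case MotionEvent.ACTION_DOWN:
            startDrag((int) event.getX(), (int) event.getY());
            return true;
        case MotionEvent.ACTION_MOVE:
            updateDrag((int) event.getX(), (int) event.getY());
            return true;
        case MotionEvent.ACTION_UP:
        case MotionEvent.ACTION_CANCEL:
            finishDrag();
            return true;
    }
    return super.onTouchEvent(event);
}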
I've also declared getter methods for targetWidth and targetHeight and accessors for bitmap.
public int getTargetWidth() {
return targetWidth;
}
public int getTargetHeight() {
return targetHeight;
}
public void setBitmap(Bitmap bitmap) {
this.bitmap = bitmap;
puzzleWidth = 0;
puzzleHeight = 0;
}
public Bitmap getBitmap() {
return bitmap;
}
5. Creating the Activity Class
With the implementation of the SlidePuzzle and SlidePuzzleView classes finished, it is time to focus on the main Activity class that your IDE created for you. The main Activity class in this example is named SlidePuzzleMain, but yours may be named differently. The SlidePuzzleMain class will bring together everything that we've created so far.
package com.dolby.DolbyPuzzle;

import android.app.Activity;
import android.os.Bundle;
public class SlidePuzzleMain extends Activity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
...
}
}
protected static final int MENU_SCRAMBLE = 0;
protected static final int MENU_SELECT_IMAGE = 1;
protected static final int MENU_TAKE_PHOTO = 2;
protected static final int RESULT_SELECT_IMAGE = 0;
protected static final int RESULT_TAKE_PHOTO = 1;
protected static final String KEY_SHOW_NUMBERS = "showNumbers";
protected static final String KEY_IMAGE_URI = "imageUri";
protected static final String KEY_PUZZLE = "slidePuzzle";
protected static final String KEY_PUZZLE_SIZE = "puzzleSize";
protected static final String FILENAME_DIR = "dolby.digital.plus";
protected static final String FILENAME_PHOTO_DIR = FILENAME_DIR + "/photo";
protected static final String FILENAME_PHOTO = "photo.jpg";
protected static final int DEFAULT_SIZE = 3;
private SlidePuzzleView view;
private SlidePuzzle slidePuzzle;
private Options bitmapOptions;
private int puzzleWidth = 1;
private int puzzleHeight = 1;
private Uri imageUri;
private boolean portrait;
private boolean expert;
In the activity's onCreate method, we instantiate the bitmapOptions object, setting its inScaled attribute to false. We also create an instance of the SlidePuzzle class and an instance of the SlidePuzzleView class, passing the activity as the view's context. We then set the activity's view by invoking setContentView and passing in the view object.
bitmapOptions = new BitmapFactory.Options();
bitmapOptions.inScaled = false;
slidePuzzle = new SlidePuzzle();
view = new SlidePuzzleView(this, slidePuzzle);
setContentView(view);
In loadBitmap, we load the image that you added to the project at the beginning of this tutorial and that we'll use for the puzzle. The method accepts the location of the image as its only argument, which it uses to fetch the image.
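The method isn't listed here, but a minimal sketch, with the stream handling and error reporting as my own assumptions, could look like this:

private void loadBitmap(Uri uri) {
    try {
        // Works for content://, file://, and android.resource:// URIs alike.
        InputStream stream = getContentResolver().openInputStream(uri);
        Bitmap bitmap = BitmapFactory.decodeStream(stream, null, bitmapOptions);
        stream.close();
        view.setBitmap(bitmap);
        imageUri = uri;
    } catch (IOException e) {
        Toast.makeText(this, "Could not load the selected image.", Toast.LENGTH_SHORT).show();
    }
}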
To make the puzzle game more appealing to the player, we'll add an option to customize the game by allowing the player to select an image for the puzzle from the user's photo gallery or take one with the device's camera. We'll also create a menu option for each method.
To make all this work, we implement two new methods, selectImage and takePicture, in which we create an intent to fetch the image we need. The onActivityResult method handles the result of the user's selection. Take a look at the code snippet below to understand the complete picture.
private void selectImage() {
Intent photoPickerIntent = new Intent(Intent.ACTION_PICK);
photoPickerIntent.setType("image/*");
startActivityForResult(photoPickerIntent, RESULT_SELECT_IMAGE);
}
private void takePicture()
{
File dir = getSaveDirectory();
if(dir == null)
{
Toast.makeText(this, getString(R.string.error_could_not_create_directory_to_store_photo), Toast.LENGTH_SHORT).show();
return;
}
File file = new File(dir, FILENAME_PHOTO);
Intent photoPickerIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
photoPickerIntent.putExtra(MediaStore.EXTRA_OUTPUT, Uri.fromFile(file));
startActivityForResult(photoPickerIntent, RESULT_TAKE_PHOTO);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent imageReturnedIntent)
{
super.onActivityResult(requestCode, resultCode, imageReturnedIntent);
switch(requestCode)
{
case RESULT_SELECT_IMAGE:
{
if(resultCode == RESULT_OK)
{
Uri selectedImage = imageReturnedIntent.getData();
loadBitmap(selectedImage);
}
break;
}
case RESULT_TAKE_PHOTO:
{
if(resultCode == RESULT_OK)
{
File file = new File(getSaveDirectory(), FILENAME_PHOTO);
if(file.exists())
{
Uri uri = Uri.fromFile(file);
if(uri != null)
{
loadBitmap(uri);
}
}
}
break;
}
}
}
All that's left for us to do is create a menu option for each method. The implementation below illustrates how you can create an options menu, which is shown to the user when they tap the device's menu button.
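Here's a sketch of what that could look like, using the MENU_* constants declared earlier. The menu titles are placeholders; in a real project you'd use string resources:

@Override
public boolean onCreateOptionsMenu(Menu menu) {
    menu.add(0, MENU_SCRAMBLE, 0, "Scramble");
    menu.add(0, MENU_SELECT_IMAGE, 0, "Select image");
    menu.add(0, MENU_TAKE_PHOTO, 0, "Take photo");
    return true;
}

@Override
public boolean onOptionsItemSelected(MenuItem item) {
    switch (item.getItemId()) {
        case MENU_SCRAMBLE:
            // Reshuffle the puzzle and redraw the board.
            slidePuzzle.shuffle();
            view.invalidate();
            return true;
        case MENU_SELECT_IMAGE:
            selectImage();
            return true;
        case MENU_TAKE_PHOTO:
            takePicture();
            return true;
    }
    return super.onOptionsItemSelected(item);
}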
The options menu should look similar to the one shown below.
By tapping Select image or Take photo, you should be able to select an image from your device's photo gallery or take one with the camera, and use it in the puzzle game.
You may have noticed that I've also added a third menu option to shuffle the pieces of the puzzle board. This menu option invokes the shuffle method that we implemented in the SlidePuzzle class a bit earlier in this tutorial.
Before we implement the Dolby Audio API, let's create the two methods that will trigger the playback of the audio files we added earlier. You can leave the implementations of these methods blank for now. The onFinish method is invoked when the game is finished while playSound is called whenever a puzzle piece is moved.
public void onFinish() {
}
public void playSound() {
}
All that's left for us to do is invoke loadBitmap from the activity's onCreate method and pass it the location of the image you want to use for the puzzle.
Uri path = Uri.parse("android.resource://com.dolby.DolbyPuzzle/" + R.drawable.dolby);
loadBitmap(path);
Take a look at the image below for an example of what your game should look like, depending on the image you've used for the puzzle.
6. Implementing the Dolby Audio API
Step 1
As I mentioned at the beginning of this tutorial, integrating the Dolby Audio API is easy and only takes a few minutes. Let's see how we can leverage the Dolby Audio API in our game.
Start by downloading the Dolby Audio API from Dolby's developer website. To do so, create a free developer account or sign in if you already have one. Once you've downloaded the API, add the library to your project.
Step 2
Before you integrate the Dolby Audio API, it's a good idea to add volume controls to your application. This is easy to do and takes only a single line of code. Add the following code snippet to your activity's onCreate method.
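The snippet itself isn't reproduced here, but assuming the tutorial means routing the device's hardware volume keys to the media stream, the canonical one-liner is the following (it requires importing android.media.AudioManager):

setVolumeControlStream(AudioManager.STREAM_MUSIC);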
The next step is to declare two variables in your Activity class, an instance of the MediaPlayer class and an instance of the DolbyAudioProcessing class. Don't forget to add the required imports at the top.
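A sketch of those declarations, using the mPlayer and mDolbyAudioProcessing names the later snippets rely on. The Dolby import paths are an assumption and may differ depending on the version of the library you downloaded:

import android.media.MediaPlayer;

import com.dolby.dap.DolbyAudioProcessing;
import com.dolby.dap.OnDolbyAudioProcessingEventListener;

private MediaPlayer mPlayer = null;
private DolbyAudioProcessing mDolbyAudioProcessing = null;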
We'll now make the Activity class adopt the OnDolbyAudioProcessingEventListener and MediaPlayer.OnCompletionListener interfaces.
public class SlidePuzzleMain extends Activity implements MediaPlayer.OnCompletionListener,
OnDolbyAudioProcessingEventListener {
...
}
To adopt these interfaces, we need to implement a few methods as shown in the code snippet below.
// MediaPlayer.OnCompletionListener
@Override
public void onCompletion(MediaPlayer mp) {}
// OnDolbyAudioProcessingEventListener
@Override
public void onDolbyAudioProcessingClientConnected() {}
@Override
public void onDolbyAudioProcessingClientDisconnected() {}
@Override
public void onDolbyAudioProcessingEnabled(boolean b) {}
@Override
public void onDolbyAudioProcessingProfileSelected(DolbyAudioProcessing.PROFILE profile) {}
We enable the DolbyAudioProcessing object when onDolbyAudioProcessingClientConnected is invoked and disable it again when onDolbyAudioProcessingClientDisconnected is called.
@Override
public void onCompletion(MediaPlayer mp) {
if(mPlayer != null) {
mPlayer.release();
mPlayer = null;
}
}
@Override
public void onDolbyAudioProcessingClientConnected() {
mDolbyAudioProcessing.setEnabled(true);
}
@Override
public void onDolbyAudioProcessingClientDisconnected() {
mDolbyAudioProcessing.setEnabled(false);
}
@Override
public void onDolbyAudioProcessingEnabled(boolean b) {}
@Override
public void onDolbyAudioProcessingProfileSelected(DolbyAudioProcessing.PROFILE profile) {}
As you can see in the previous code snippet, we release the MediaPlayer object when it has finished playing the audio file.
To play a sound when the player moves a puzzle piece, we need to implement the playSound method. Before we focus on playSound, we first store a reference to the SlidePuzzleMain instance in the SlidePuzzleView class, and in the view's playSlide method we call playSound on that reference.
In the playSound method, we create an instance of the MediaPlayer class and make use of the Dolby Audio API to initiate the processing of the audio. If the Dolby Audio API isn't supported by the user's device, the getDolbyAudioProcessing method will return null.
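Its implementation isn't shown here, but based on the description and the onFinish method below, a sketch could look like this. R.raw.slide is a hypothetical name for the move sound you added earlier:

public void playSound() {
    if (mPlayer != null) {
        mPlayer.release();
        mPlayer = null;
    }
    mPlayer = MediaPlayer.create(SlidePuzzleMain.this, R.raw.slide);
    mPlayer.start();
    // getDolbyAudioProcessing returns null when the device doesn't support the Dolby Audio API.
    mDolbyAudioProcessing = DolbyAudioProcessing.getDolbyAudioProcessing(this, DolbyAudioProcessing.PROFILE.GAME, this);
}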
As you can see below, the implementation of the onFinish method is very similar to that of playSound. The main difference is that we show a message to the user if the Dolby Audio API isn't available. As you may remember, the onFinish method is invoked when the game is finished and the player has completed the puzzle.
public void onFinish()
{
if(mPlayer != null) {
mPlayer.release();
mPlayer = null;
}
mPlayer = MediaPlayer.create(
SlidePuzzleMain.this,
R.raw.fireworks);
mPlayer.start();
mDolbyAudioProcessing = DolbyAudioProcessing.getDolbyAudioProcessing(this, DolbyAudioProcessing.PROFILE.GAME, this);
if (mDolbyAudioProcessing == null) {
Toast.makeText(this, "Dolby Audio Processing not available on this device.", Toast.LENGTH_SHORT).show();
}
shuffle();
}
We also call shuffle at the end of onFinish to start a new game when the player has finished the puzzle.
Step 5
It's important that we release the DolbyAudioProcessing and MediaPlayer objects when they're no longer needed. Releasing these objects ensures we don't needlessly drain the device's battery or degrade its performance.
We start by declaring three methods. The first method, releaseDolbyAudioProcessing, releases the DolbyAudioProcessing object and sets mDolbyAudioProcessing to null. The second method, restartSession, restarts the session managed by the DolbyAudioProcessing object, and in the third method, suspendSession, the audio session is suspended and the current configuration is saved for later use.
public void releaseDolbyAudioProcessing() {
if (mDolbyAudioProcessing != null) {
try {
mDolbyAudioProcessing.release();
mDolbyAudioProcessing = null;
} catch (IllegalStateException ex) {
handleIllegalStateException(ex);
} catch (RuntimeException ex) {
handleRuntimeException(ex);
}
}
}
// Backup the system-wide audio effect configuration and restore the application configuration
public void restartSession() {
if (mDolbyAudioProcessing != null) {
try{
mDolbyAudioProcessing.restartSession();
} catch (IllegalStateException ex) {
handleIllegalStateException(ex);
} catch (RuntimeException ex) {
handleRuntimeException(ex);
}
}
}
// Backup the application Dolby Audio Processing configuration and restore the system-wide configuration
public void suspendSession() {
if (mDolbyAudioProcessing != null) {
try{
mDolbyAudioProcessing.suspendSession();
} catch (IllegalStateException ex) {
handleIllegalStateException(ex);
} catch (RuntimeException ex) {
handleRuntimeException(ex);
}
}
}
/** Generic handler for IllegalStateException */
private void handleIllegalStateException(Exception ex)
{
Log.e("Dolby processing", "Dolby Audio Processing has a wrong state");
handleGenericException(ex);
}
/** Generic handler for IllegalArgumentException */
private void handleIllegalArgumentException(Exception ex)
{
Log.e("Dolby processing","One of the passed arguments is invalid");
handleGenericException(ex);
}
/** Generic handler for RuntimeException */
private void handleRuntimeException(Exception ex)
{
Log.e("Dolby processing", "Internal error occurred in Dolby Audio Processing");
handleGenericException(ex);
}
private void handleGenericException(Exception ex)
{
Log.e("Dolby processing", Log.getStackTraceString(ex));
}
As you can see in the above code snippet, I've also created a handful of methods to handle any exceptions that may be thrown in releaseDolbyAudioProcessing, restartSession, and suspendSession.
The three methods we just created need to be invoked at several key moments of the application's lifecycle. We accomplish this by overriding the onStop, onStart, onDestroy, onResume, and onPause methods in our SlidePuzzleMain class.
In onStop, we tell the MediaPlayer object to pause and in onStart the MediaPlayer object continues playback if it isn't null. The onDestroy method is called when the application is closed. In this method, we release the MediaPlayer object, set mPlayer to null, and invoke releaseDolbyAudioProcessing, which we implemented earlier.
@Override
protected void onStop()
{
super.onStop();
if (mPlayer != null) {
mPlayer.pause();
}
}
@Override
protected void onStart() {
super.onStart();
if (mPlayer != null)
{
mPlayer.start();
}
}
@Override
protected void onDestroy() {
super.onDestroy();
Log.d("Dolby processing", "onDestroy()");
// Release Media Player instance
if (mPlayer != null) {
mPlayer.release();
mPlayer = null;
}
this.releaseDolbyAudioProcessing();
}
@Override
protected void onResume() {
super.onResume();
restartSession();
}
@Override
protected void onPause() {
super.onPause();
Log.d("Dolby processing", "The application is in background, suspendSession");
// If audio playback is not required while your application is in the background,
// restore the Dolby audio processing system configuration to its original state
// by calling suspendSession. This ensures that the use of the system-wide audio
// processing is sandboxed to your application.
suspendSession();
}
Finally, in onPause and onResume we suspend and restart the audio session by invoking suspendSession and restartSession respectively.
If you've followed the steps outlined in this tutorial, then your game should now be fully functional with the Dolby Audio API integrated. Build the project to play with the final result.
7. Summary
I'm sure you agree that integrating the Dolby Audio API is easy and doesn't take more than five minutes. Let's briefly summarize the steps we've taken to integrate the API.
import the Dolby Audio API library
create an instance of the DolbyAudioProcessing class
implement the OnDolbyAudioProcessingEventListener interface
enable the DolbyAudioProcessing instance in onDolbyAudioProcessingClientConnected
disable the DolbyAudioProcessing instance in onDolbyAudioProcessingClientDisconnected
after starting the media player, initialize the DolbyAudioProcessing instance using the GAME profile
check if the DolbyAudioProcessing object is null to verify if the Dolby Audio API is supported by the device
to conserve battery life and optimize performance, suspend and release the DolbyAudioProcessing instance when the application is destroyed or moved to the background
Conclusion
Even though the game we created is fairly simple, it is important to remember the focus of this tutorial, the Dolby Audio API. The mobile market is a crowded place and standing out from other games isn't easy. Adding superior sound to your game won't go unnoticed by your users and can help your game stand out. Head over to Dolby's developer website to give it a try.
Have you heard of Stencyl? Stencyl is a game engine that allows you to easily create applications and games for iOS and Android. The platform also allows you to publish your applications as Flash games—for the web—or for Windows, Linux, or OS X.
The best part is that you don't need to write a single line of code. That's right. You can simply drag and drop blocks of code to create behaviors for the actors of your application. Excited? Let's get started.
1. Introduction
Stencyl is a game engine for everyone—from complete beginners to advanced developers. Since Stencyl 3.0, projects make use of the Haxe programming language. That's right. You can write code in Stencyl if you want to, but it isn't a requirement. Note that Stencyl can only create 2D games. If you want to create 3D games, then I suggest you take a look at a platform like Unity.
Initially released in 2011 as StencylWorks, Stencyl allows complete novices to create 2D games for computers and mobile devices. The platform, developed by Jonathan Chung, uses Box2D as its physics and collision engine and relies on the OpenFL software development kit. These components, together with the Haxe programming language, power Stencyl and make it possible to write games once and run them everywhere.
Stencyl supports a wide range of platforms:
iOS
Android
Flash
Windows
OS X
Linux
The building blocks of a typical Stencyl game fall into one of four categories:
Actors: An actor can be the player, an enemy, a tree, or something else. An actor usually takes the form of an image, or a series of images, creating an animation. For example, if we were to make a game like Super Mario Bros., the game would include an actor for Mario, one for Bowser, and actors for the mushrooms.
Scenes: A game can have many scenes. A game usually has a main menu or starting scene, scenes for the levels of the game, and a game over scene.
Behaviors: Behaviors are ready-made, reusable abilities that you can assign to actors and scenes in your game. You can also create your own behaviors to make your game unique and challenging.
Events: Events are custom blocks of commands that you can create and assign to an actor. You can create events for actors through the use of Stencyl's Event Editor, which we'll see later in this tutorial.
In this tutorial, I'll be using the Windows version of Stencyl. The Mac and Linux versions should be mostly the same with only a few minor differences.
2. Installation
Now that we know what Stencyl is, let's install Stencyl and set it up. Visit Stencyl's Official Download Page and download the version for your operating system. Follow the installation instructions, choose your designated installation directory, and wait a few minutes. Once the installation is finished, fire up Stencyl to get started.
With Stencyl up and running, feel free to check out the sample games it includes and take a look at the components we discussed earlier, actors, scenes, behaviors, and events. You can also download a few other sample games and kits at Stencyl's Developer Center or by visiting the Arcade section for inspiration.
3. Your First Application
In the rest of this tutorial, we're going to create a simple application that displays some text and an image. We first need to create a game to place our text in. You can do so by opening Stencyl and choosing Create New > Game > Blank Game from the File menu. Enter a name for your project and click the create button to get started.
The next step is to create a scene. Choose Create New > Scene from the File menu or navigate to Scenes under Resources (in the left sidebar) and click Create New. Give the new scene a name and stick with the defaults for now. After clicking Create, the scene editor should show up.
With the first scene ready to use, it's time to add an event to the scene. Click the Events tab, and select Basics > When Drawing from the Add Event menu. With the newly created event selected, inspect the sidebar on the right and click the button labeled Drawing. Do you see the block titled draw text anything at (x: 0 y: 0)? Drag it into the when drawing event we created a moment ago and enter Hello World at (x: 100 y: 100) as shown below.
It's now time to create an actor. We're going to show the following picture in our game.
We first need to create a new actor. Open the Dashboard tab and select Create New from the Actor Types menu. Type Star in the name field and hit Create.
You should now see the actor editor, which is currently empty. Click the editor to add a new animation and then click the Frames section to add a new frame. In the top left, click the Choose Image... button and select the star image you saw earlier. Click the Add button.
Head back to the first scene you created earlier. With the Scene tab selected at the top, open the Palette tab in the right sidebar and select the Actors section. Do you see the actor you just created? Select it and drag it to the scene. Click once to add it to the scene.
4. Exporting Your Application
The hardest part is done. It's time to test the application. The easiest way to test your app is to run it as a Flash application. In most cases, running the app as a Flash application is very similar to running it on an Android device.
Click the button labeled Test Game in the top right of the editor with Platform set to Flash (Player). It should only take a few moments to create the Flash application. If you don't run into any issues, Stencyl should automatically launch your application in a separate window. That's it. You've just created your first Stencyl application.
If you want to compile your application for Android, you need to take care of a few extra steps.
Install the Java Development Kit (JDK). Note that the Stencyl website recommends you use JDK 6 and avoid version 7.
On your Android phone, enable USB Debugging and disable USB Mass Storage.
Connect your Android phone with your development machine and make sure it doesn't go to sleep while it compiles the application.
In Stencyl, press Control-Shift-5 to show the log window. This will be very helpful if Stencyl runs into problems during the compilation of your application. Choose Android from the Platform menu and click Test Game. You should see a message telling you that Stencyl is compiling the application. After compilation, it will display the message Sending to Device. If all went well, then your application is running on your Android device.
You may need to wait a few moments for the compilation to complete. If you notice that the compilation takes a long time, then inspect the logs to see if any errors have popped up. If something went wrong, you may want to save the logs and post them on the Stencyl Forums to get help from the community.
A common error you may run into is that the application binary isn't sent to the Android device. This is easy to fix though. On Windows, open a file explorer and navigate to C:\Users\<YourUserName>\AppData\Roaming\Stencyl\stencylworks\games-generated\<YourGameName>\Export\android\bin\bin. Make sure to change <YourUserName> to your user name and <YourGameName> to the name of the game. If your application was compiled successfully, you should see your application's .apk file in that directory. If you named your application Mygame, the .apk file should be named mygame.apk. Copy the .apk file to your Android device, download a free file viewer like ES File Explorer, navigate to the .apk file, and open it.
5. Finding Help
If you're creating a game with Stencyl and you find yourself stuck, then one of the best ways to solve your problem is by visiting the official Stencyl Forums and asking your question. You can also visit Stencylpedia, Stencyl's official wiki, and find an answer to your question there.
If you want to become a more experienced Stencyl user, then check out some books and courses on Stencyl or visit the extensions market to download extensions that make your game better and easier to build.
Monetizing your application with Stencyl is easy as well; the platform supports several ways to make money with your application.
In this tutorial, you learned about Stencyl as a platform and about the basic components of a Stencyl project: scenes, actors, events, and behaviors. We also saw how to export an application to Flash and Android. I hope you enjoyed this tutorial. If you have questions, feel free to leave a comment below.
Previously, you learned about design patterns and how they apply to the iOS platform. In this article, we take a closer look at design patterns on the Android platform and how they differ from design patterns on iOS. If you're unfamiliar with design patterns on Android, then this article is for you.
If you haven't read my previous article about design patterns on iOS, then I encourage you to take a few minutes to read that article before continuing. Some of the concepts and terminology used in this article were introduced in Introduction to iOS Design Patterns.
Android Design Principles
Every operating system or platform has guidelines for design. Consistent design helps to create a distinct look and feel for the operating system or platform. When you’re working on an application that will target both iOS and Android, for example, you'll learn that there are a number of subtle—and not so subtle—differences in the vision that underlies each design. The following are a few aspects that Android emphasizes differently than other mobile operating systems.
Personalization: The Android guidelines recommend that you include a level of personalization within an application as it helps make users feel at home. Giving the user the ability to theme an application is a good example of this concept.
Icons Over Words: If you can communicate something through visuals, such as icons or images, then that should be the preferred method of communication. If you come across a scenario in which words are absolutely necessary, then make sure to keep it concise and actionable.
Every User Is An Expert: Mobile applications should always be easy to use. At the same time, you should give the user the impression that they're a power user. This can be accomplished by providing shortcuts or by implementing a powerful onboarding process.
Standard Android Interface Themes
As a starting point, the Android platform currently provides two standard themes when you want to design applications for the platform. These themes are useful and recommended when you're just starting out, but they are by no means mandatory.
The themes are called Holo Light and Holo Dark, a light and a dark theme, depending on the visual style you aim to achieve in your application.
Using one of the standard themes will help you when you're just starting out and learning more about Android's design patterns. Because the standard themes are a part of Android, you start from a solid foundation that conforms to the standards of the operating system. It’s also a great starting point for custom designs.
Application Icons
Of course, it's not just about the application's interface. Ever since flat design became popular on iOS, a clear difference has emerged between application icons on Android and iOS.
Most Android applications have application icons that create a sense of depth to mimic the real world. Many application icons on Android have a subtle 3D effect applied to them to create that sense of depth. Another noticeable difference between icons on Android and iOS is the use of transparency.
If you're developing an application that targets both Android and iOS, then it's important to create a concept that works well on both platforms.
Interaction Design
When a user interacts with an application's user interface, she expects some form of visual feedback. This feedback is generally more explicit on Android than it is on iOS.
Visual feedback is great for improving the user experience, because it creates a feeling of responsiveness and control, which are two aspects users have come to expect of mobile applications.
I encourage you to download some of the popular applications of each platform and explore how they respond to your input.
Differences Between Android & iOS
It shouldn't be a surprise that a number of components are unique to or different on the Android platform.
Note that these are just a few examples. The best advice I can give you when you’re creating an application for a platform you’re not familiar with is to get input from users and developers who are familiar with the platform. They can quickly point out what feels odd or wrong and what doesn't.
Fixed Tabs
In iOS, a tab bar is usually found at the bottom of the screen in an application's user interface. In Android, however, tabs are positioned at the top. Android supports multiple and scrollable tabs, while iOS has specific guidelines about the use cases in which tabs are a good fit; the guidelines even specify the maximum number of tabs.
The result is that a user interface designed for the Android platform may not always work on iOS due to technical limitations. Scrollable tabs, for example, are unique to Android and not supported by the iOS SDK.
Drop-Down Menu
A drop-down menu is something you only find on Android. There are different use cases in which a drop-down menu is a good fit. When you have many functionalities or a lot of content groups with inconsistent navigation, a drop-down menu is a viable solution to tackle the problem.
Back & Up
Since Android 4.0, there is an on-screen navigation bar at the bottom of the screen. Apart from the home and recents buttons, the navigation bar contains a back button, and an application can additionally show an up button in its action bar. Tapping back brings the user to the previous screen, while tapping up brings the user one level up in the application's hierarchy.
For example, I could be swiping through different pieces of content in an application. The back button would bring me back to the previous item while tapping up would navigate me to the selection menu from which I’m able to select a new item. This is a powerful feature, which you should take into account while creating the information architecture of an application.
In Android, you generally don't see custom back buttons like the ones we usually see in the navigation bar in iOS applications.
Widgets
Android also supports widgets, which are completely absent on the iOS platform.
Exploring Design Patterns
Let's dive deeper into some of the design patterns on Android. In general, the best way to approach a user interface is to see how other applications solve a particular design problem and how standard interface elements can be used to solve it.
For example, how do other applications display content? How do they handle user input? Or implement e-commerce? By looking at a few examples, you get an idea of the possible solutions to solve the design problem you're facing. Does this mean you should blatantly copy an existing application? Not really.
A well thought out app tries to solve a specific problem. Usually, your app tries to tackle a unique problem, which means that you need to figure out a suitable user flow.
The following list should help you start thinking about possible design patterns for your application:
Show, don’t tell. Teach users about the functionality of your app through actions rather than words. In general, tutorials should be avoided if possible.
Focus. A screen should preferably have a single goal from the user's perspective.
Encourage the exploration of diverse features through efficient navigation. Open five applications and see how each implements navigation. Learn the benefits and downsides of each implementation.
Define the data model and user flow before building and designing your application.
When building a new application, search for similar applications to learn from. What does the user flow look like? How is the user interface designed and how can it be improved?
Applying design patterns boils down to thinking abstractly about building the most efficient application with a strong focus on the user flow and interface.
Involve Android users in the design process to get early feedback and a fresh look on the current state of your design.
Conclusion
Learning more about Android design patterns means that you need to become more familiar with the Android platform. Creating a user interface that closely resembles the standards of the operating system means users have a greater sense of familiarity and an improved user experience.
Keep in mind that we've only briefly touched on the subject. There are many resources available that will help you improve your understanding of design patterns on Android. However, you learn most by creating user flows and user interfaces.
In my previous article, you learned about the basics of JSONModel. You saw how easy it is to work with JSON using JSONModel and how it does a lot for you behind the scenes, such as data validation and conversion.
In this tutorial, you will create a more complex application and you will learn about a number of features that bring even more power to your model classes.
By the end of this article, you'll have created a Flickr browser for iOS. The application will talk to Flickr's JSON API and display a collection of photos I uploaded to Flickr for this tutorial.
You will learn how to:
process complex API responses
use JSONModel's data transformations for URLs and dates
map JSON keys to properties with different names
create custom data conversions
preprocess API responses before they are parsed
The finished Flickr browser will look like this:
1. Project Setup
Because we will focus on learning about JSONModel's features, I'd like to give you a head start by beginning with a basic project. You can find the project in the source files of this tutorial.
Open the Xcode project and take a moment to inspect its contents. The project has a simple setup, a single table view controller showing an empty table view. I've also included two libraries in the project, JSONModel and SDWebImage.
Build and run the project to make sure that the project compiles without errors.
2. Response Model
Let's have a look at Flickr's public photos feed. Open the API endpoint in your browser and take a moment to inspect the response. The response only includes a few top-level keys, such as title and modified. There's also a key called items containing a list of photo objects each with a title, link, description, etc.
In the following screenshot, I've highlighted one of those photo objects in the items array to help you figure out its properties.
Let's start by creating a model class for the response of the public photos feed. Create a new class in Xcode, name it PublicPhotosModel, and make it inherit from JSONModel. You are interested in the title and modified keys of the JSON response, so add two properties with the same names to the interface of the PublicPhotosModel class. You will also fetch the array of photos and store them in a property we'll name items. Set the type of the items property to NSArray as shown below.
In this example, you use one of the built-in data transformers of JSONModel. When you have a date in your JSON response that adheres to the W3C format, you can declare the matching model property of type NSDate and JSONModel will know what to do.
Your basic model class to fetch JSON data from the Flickr API is ready. Let's connect it to the table view in your view controller. You will use the items array in your model object, representing the response of the Flickr API, as the data source of the table view.
Note: You may be wondering how to convert dates if they don't conform to the W3C standard or if the JSON response includes timestamps. Keep reading. Before the end of this tutorial, you will know how to transform any value to an Objective-C object.
3. Fetching JSON from the Flickr API
Start by opening ViewController.m and, under the existing import statement, add an import statement for the model class we just created:
#import "PublicPhotosModel.h"
Next, declare a private property to store the PublicPhotosModel instance we'll be using:
You'll fetch the JSON data using a helper method, fetchPhotos, which we'll implement shortly. We invoke fetchPhotos in the view controller's viewDidLoad method:
The implementation is pretty similar to what you wrote in the previous article on JSONModel. You first declare photosURL, which contains the URL of a particular feed of photos on Flickr, and then create and fire an NSURLSessionDataTask instance to fetch the list of photos of that feed.
Note: If you have a Flickr account and know your Flickr ID, then feel free to include it in the request URL to have the application fetch your own feed of photos.
If you've read the first article on JSONModel, then you already know how to turn the NSData object, returned by the data task, into a JSONModel object. Flickr, however, doesn't always return a valid JSON response. In other words, you'll need to do some preprocessing before creating your model object.
The Flickr API has a special feature that escapes single quotes in the JSON response. The problem is that this renders the JSON response invalid according to the current standards and, as a result, the NSJSONSerialization API cannot process it.
To fix this, you only need to remove the escaped single quotes in the JSON response. You can then safely create your model object. Replace // Process Data with the following snippet:
You start by creating an NSString object from the NSData instance the data task returns to you. It's safe to assume the text is UTF8 encoded since Flickr uses only UTF8. You then replace all occurrences of \' with ' to prepare the JSON response for JSONModel.
Because you already have the JSON response as a string object, you can use the custom JSONModel initializer, initWithString:error: to create the model instance. You use GCD to update the user interface on the main thread. The view controller's title is updated with the title property of the PublicPhotosModel instance and the table view is reloaded.
Build and run the project to check that the title is set, which indicates that the model object is properly initialized. Give the application a moment to fetch the JSON data from Flickr's API. You should then see the title of the feed at the top of the screen:
If, for some reason, you don't see the title of the feed as in the above screenshot, then add a log statement in the completion handler of the data task to debug the issue. If you want to check if an error was thrown while creating the model object, then update the initialization of the model object as follows:
NSError *err;
self.photosModel = [[PublicPhotosModel alloc] initWithString:rawJSON error:&err];
if (err) {
NSLog(@"Unable to initialize PublicPhotosModel, %@", err.localizedDescription);
}
As you can see, JSONModel uses the standard Cocoa error handling paradigm, which means that you can check if initWithString:error: throws an error.
4. Implementing the Table View
At the moment, JSONModel treats the array of items as an ordinary array, containing NSDictionary objects. This is fine for now, but we'll create a proper photo model later in this tutorial. It's time to populate the table view with the items in the array.
Let's start by building the user interface. First, you'll set the title of the table view section header, which will display the last modification date of the Flickr feed. You can use a NSDateFormatter instance to convert the NSDate object to a readable string and return it from tableView:titleForHeaderInSection::
Next, add the two required methods of the table view data source protocol to tell the table view how many sections and rows it contains. Use self.publicPhotos.items as the table view's data source:
Because the image view of the UITableViewCell class doesn't load remote images asynchronously, you'll need a custom UITableViewCell subclass. Create a new Objective-C class, name it ImageCell, and make it a subclass of UITableViewCell. Open ImageCell.h and add a property of type UIImageView, webImageView:
Open ImageCell.m and override the initializer Xcode put in there for you. In initWithStyle:, you need to hide the default image view and create a new custom image view. Believe it or not, that's what it takes to load images asynchronously in a table view cell.
Are you confused by the second half of the implementation? You create a blank image of 20px by 20px and set it as the image of the cell's default image view. You do this to position the cell's text label properly. This happens even before the image for your custom image view is loaded from the web.
Revisit ViewController.m and, under the existing import statements, add an import statement for the custom UITableViewCell class we created.
#import "ImageCell.h"
You're ready for the final piece of the puzzle, the data source method to create the cells for your table:
Build and run the project one more time to see that the table now displays one cell for each of the objects found in items. Speaking of items, it's time to create the photo model.
5. Photo Model
As we saw in the previous tutorial, you need to create a separate model class for a list of objects in the JSON response, the list of photo items in our example. Create a new Objective-C class, name it PhotoModel, and make it a subclass of JSONModel.
Have another look at the raw JSON response you receive from the Flickr API and decide what keys each photo object needs to have:
You want to fetch the title, the URL of the photo, when it was published, and the link to the detail page on Flickr. We have a problem though. The URL of the photo is enclosed in yet another object under the media key. Does this mean you need to create another JSONModel subclass only to extract the single key, m, containing the URL of the photo?
Fortunately, the short answer is no. To elegantly solve this problem, you need to learn and understand how key mapping works in JSONModel. Mapping keys is a simple way to instruct your JSONModel subclass how to extract data from a JSON response, which is especially useful if the JSON keys don't exactly match the names of your class's properties.
Start by declaring the properties we need in the PhotoModel class:
We use two of the built-in data transformers of JSONModel. The date property is of type NSDate, and JSONModel will make sure to convert the W3C date stored under the published key to an NSDate object. The url and link properties are of type NSURL, and JSONModel will convert the corresponding strings of the JSON response to NSURL objects.
Open PhotoModel.m and add the following code snippet to set up key mapping for the photo model:
In keyMapper, you override JSONModel's keyMapper method. It returns a JSONKeyMapper instance, the object that maps JSON keys to property names. The JSONKeyMapper class has a convenient initializer that accepts a dictionary and creates a key mapping between the JSON data and your class.
In the above implementation of keyMapper, you define the following key mapping:
The published key in the JSON response maps to the model's date property.
The m key of the media object in the JSON response maps to url in the model.
With keyMapper implemented, JSONModel can parse the JSON response and initialize the photo model as defined by the PhotoModel class.
Before moving on, open PhotoModel.h once more and, at the top, declare a protocol with the same name as the name of the class:
You now need to make a couple of adjustments in your ViewController class. In order to load and display photos in your table view's cells, you'll use a method declared in the SDWebImage library, which was included in the project you started with. Open ViewController.m and add a new import statement at the top:
#import "UIImageView+WebCache.h"
Next, revisit your implementation of tableView:cellForRowAtIndexPath: in which you currently only display the row number. However, because you can now fetch the corresponding PhotoModel object for each row in the table view, it is better to display the photo's details instead. Update the implementation as shown below:
You first fetch the PhotoModel object corresponding to the row in the table view and you then populate the cell's text label with the title of the photo. You use SDWebImage's setImageWithURL:placeholderImage: to asynchronously load and display the photo from the given URL.
Believe it or not, you've already got a working photo stream. Build and run the project to see the result:
7. Custom Data Transformations
In this section, you're going to add a custom feature to the PhotoModel class, which will convert a string from the JSON response to a custom Objective-C class. This will teach you how to convert any JSON data to any Objective-C class.
In the JSON data for a photo, there's a tags key that contains a string of tags. You'll add a new property to the PhotoModel class. The type of this property will be a custom Objective-C class that can handle tags.
Note: You're not limited to converting JSON data to custom Objective-C classes. You can convert JSON data to any Cocoa class. For example, you can convert a hex color, such as #cc0033, to its UIColor equivalent. Keep reading to see how to do that.
Create a new class, name it Tags, and make it a subclass of NSObject. Open Tags.h and add a property to store the list of tags and declare a custom initializer:
#import <Foundation/Foundation.h>
@interface Tags : NSObject
@property (strong, nonatomic) NSArray* tags;
#pragma mark -
#pragma mark Initialization
- (instancetype)initWithString:(NSString*)string;
@end
Switch to Tags.m and implement the initializer you just declared. As you can see, there's nothing special about it. You use the string to create an array of tags and store the result in tags:
You now have a custom Tags class, but how do you use it in your photo model? Open PhotoModel.h, import the new class at the top, and declare a new property in the class's interface:
Build and run your project as it is to see what will happen.
Because the tags property is of type Tags, which JSONModel does not know how to handle, the application will crash and you should see the following error message in Xcode's console:
It's time to become familiar with a new class of the JSONModel library, the JSONValueTransformer class. In most cases, the JSONValueTransformer works behind the scenes and converts basic data types for you, NSNumber to a float, NSString to NSNumber, or NSString to NSDate. The class, however, cannot deal with custom classes, because it doesn't know how to work with them.
The nice thing about JSONValueTransformer is that you can extend it to help it learn how to handle custom classes—or any Cocoa class for that matter.
Select New > File... from Xcode's File menu and choose Objective-C category from the list of templates. Click Next to continue.
Name the category Tags and set Category On to JSONValueTransformer. Click Next to continue.
In this category on JSONValueTransformer, you can define the necessary methods for handling properties of type Tags. Open JSONValueTransformer+Tags.h and import the header file of the Tags class. Next, add the following two methods to the interface of the category:
Let's take a closer look at the names of these methods.
TagsFromNSString: consists of the name of the class or type you want to convert to, Tags, followed by From and then the type in the JSON data for the respective key, NSString. In short, when JSONModel finds a property of type Tags, it will try to match a JSON key of type NSString. When a match is found, it will invoke TagsFromNSString:.
JSONObjectFromTags: handles the reverse conversion. When JSONModel exports your model object back to JSON data, it needs to invoke a method that will take the Tags object and return a proper string. Thus the name of the method is JSONObjectFrom followed by the name of the class or type of the property, Tags.
Once you define these two methods, any JSONModel subclass will be able to handle objects of type Tags. Adding a category on JSONValueTransformer is a very easy way to add functionality to your application's model classes.
Let's now look into the implementation of the two methods in our category. Let's first implement the method that accepts an NSString object and returns a Tags object:
Thanks to the custom initializer, initWithString:, the implementation is simple. It takes the string of tags from the JSON data and returns a Tags object, which is assigned to your tags property in the PhotoModel class.
Next, implement the second method, which is invoked when the model object is converted to a string. This is the method that will get called when you invoke JSONModel's toDictionary and toJSON.
When a PublicPhotosModel instance is initialized, it will automatically create PhotoModel objects and store them in the items property. Each PhotoModel object will also create a Tags object for its tags property. All of this happens automatically thanks to the category we created on JSONValueTransformer.
Let's now make use of the tags property in the PhotoModel class. Open ViewController.m and update the implementation of tableView:cellForRowAtIndexPath: by populating the cell's detail text label with the photo's list of tags.
Build and run the project. You should see the tags of each photo listed below the photo's title.
To make our Flickr browser complete, implement tableView:didSelectRowAtIndexPath: of the UITableViewDelegate protocol. In tableView:didSelectRowAtIndexPath:, we fetch the corresponding photo and open the photo's detail page in Safari.
When you tap a row in the table view, you will be taken to the photo's detail page on Flickr:
Conclusion
In this tutorial, you used more complex and powerful features of the JSONModel library. I hope you can see what a time saver JSONModel can be and how it can help you on many levels in your iOS and OS X projects. If you want to learn more about JSONModel, I encourage you to explore the library's documentation.
With Google Play Games Services, you can build a range of features into your Android apps, including leaderboards, achievements, multiplayer gameplay, cloud storage, and Google+ sign-in.
In this tutorial we will work through the steps you need to take to add achievements to a simple Android game. We will prepare the development environment to use Google Play Game Services, define an achievement in the Developer Console and implement the achievement interaction in the game.
1. Prepare Your IDE
Step 1
To utilize the Google Play Game Services tools, we need to prepare our IDE. As well as using the Google Play Services library—which you need for all Google services—we will use the BaseGameUtils resource, which contains a number of classes that are particularly useful when developing games.
Start by creating a new app in your IDE. The sample code of this tutorial contains a simple game in which the user has to guess a number chosen at random. You can use that project to start developing with Google Play Game Services if you like. Create a new Android project and choose names and settings for it.
If you're not using the sample app in the download, you may wish to implement your gameplay at this point, bearing in mind what you are going to use an achievement for. For the sample app, we will simply award an achievement when the user chooses a correct answer.
Step 2
In this step, we get the IDE and our project ready to use Google Play Games Services and the utilities. Open your Android SDK Manager, which you can find under the Window menu in Eclipse. Scroll down until you see the Extras folder, expand it, and select Google Play Services and the Google Repository. You may also need the Google APIs Platform if you plan on testing on the emulator, so select that as well. You can find the latter in the directories for recent versions of the Android platform. Install the selected packages, accepting any licenses as necessary.
Step 3
We also need to include a couple of resources in the actual workspace so that we can reference them in the app, starting with the Google Play Services Library. You should find it at /extras/google/google_play_services/libproject/google-play-services_lib/ in your SDK folder. Make a copy of the library and paste it in another location on your computer.
Back in Eclipse, import the library by choosing Import > Android > Import Existing Android Code into Workspace from the File menu. Browse to the location you copied the library into, select the library, and import it. The library should appear as a project in your Eclipse Package Explorer and workspace.
Right-click the library project in Eclipse, select Properties and browse to the Android section. Select a Google APIs build target and ensure Is Library is checked. The library should now be ready to reference in your app.
Step 4
Let's now get the BaseGameUtils resource into your IDE as well. Download it from the Sample Games section of Google's developer portal. Since the code is hosted on GitHub, you can also browse it and access its guides there.
Import the BaseGameUtils resource into Eclipse using the same technique you used for the Play Services Library by selecting Import > Android > Import Existing Android Code into Workspace from the File menu. Right-click the BaseGameUtils project in your Package Explorer and make sure Is Library is checked.
We can now reference both the Google Play Services Library and BaseGameUtils resources in our app.
Step 5
Select your game app in the Eclipse Package Explorer, right-click it, and choose Properties as you did for the imported resources. In the Android section, this time click Add in the Library area. Select both Google Play Services library and BaseGameUtils to add as libraries to your project.
That's the IDE set up for developing with Games Services.
2. Prepare Your Game in the Developer Console
Step 1
In order to use achievements in your game, you need to add the game to the Google Play Developer Console. Log in to the Developer Console, click the Games Services button to the left of the console, and choose Set up Google Play game services if you haven't used them before.
Click to add a new game, select I don't use any Google APIs in my game yet, and choose a name and category for your game. Click Continue to go to the next step.
In the Game Details section, all you need to add to test your app is your game's title.
Step 2
Click Linked Apps to the left of your game listing in the Developer Console. Select Android from the Linked Apps list.
Enter your app details, including the package name you chose when you created it.
Click Save and continue at the top and choose Authorize your app now. You will be prompted to enter branding information. All you need for the moment is your app's name. In the Client ID Settings screen, select Installed application as the type, Android as the installed application type, and enter your package name.
You then need to generate a signing certificate fingerprint for authorization. You will need to run the keytool utility on your computer to do this. Open a terminal or command prompt and use the following command, but make sure to alter it to suit the location if necessary. You can use the debug certificate while testing.
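For the debug certificate, the command typically looks like this on OS X or Linux; adjust the keystore path if yours lives somewhere else:

keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android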
The keytool should write out the certificate fingerprint. Select and copy what appears after SHA1 and paste it into the Developer Console under Signing Certificate Fingerprint. Click Create Client and copy the application ID you see in the listing for your game in the Developer Console, which should be displayed next to the game name at the top of the page. Save the application ID for use later in your app.
3. Create an Achievement
Step 1
Still in the Developer Console, click the Achievements button on the left of the game listing and click Add achievement.
Before you continue, you may want to check out the Achievements page on the Developer Guide to ensure you understand the concept of an achievement in Google Play Games. Enter a name, a description, and an icon for your achievement and choose a state, points, and list order. For our sample game, we use Guessed Correctly as the name, Picked a correct number as the description, and a simple star image as the icon. Click Save to save the achievement.
Copy the achievement ID, which you can see next to the achievement in the Developer Console.
Step 2
If you navigate to the Testing section for your game, you can set email addresses for people who will have test access to the game. By default, the Developer Console will insert your own Google account email, so you should be able to use that straight away. Add any other test emails you need, then you can log out of your Google account.
4. Prepare Your Game for Accessing Games Services
Step 1
In Eclipse, we can get the app ready to access Games Services. We are going to use the technique outlined in Implementing Sign-in on Android to handle getting users signed in and out of their Google accounts. This will involve using buttons to sign in and out, so add these to your app's layout as follows:
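A minimal version of the two buttons might look like this in your layout file (the SignInButton widget ships with the Play Services library; the attribute values here are only an example):

<com.google.android.gms.common.SignInButton
    android:id="@+id/sign_in_button"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content" />

<Button
    android:id="@+id/sign_out_button"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Sign out"
    android:visibility="gone" />

Step 2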
Alter your main Activity to extend BaseGameActivity. This will let us automate certain parts of the sign-in process for your users. Also make the Activity class handle clicks:
public class MainActivity extends BaseGameActivity implements View.OnClickListener
We will respond to button taps in onClick as you can see below:
@Override
public void onClick(View view) {
    if (view.getId() == R.id.sign_in_button) {
        beginUserInitiatedSignIn();
    }
    else if (view.getId() == R.id.sign_out_button) {
        signOut();
        findViewById(R.id.sign_in_button).setVisibility(View.VISIBLE);
        findViewById(R.id.sign_out_button).setVisibility(View.GONE);
    }
}
We use methods from the BaseGameActivity class we are extending to handle sign-in (beginUserInitiatedSignIn and signOut), updating the user interface accordingly. When the app starts, it will attempt to automatically log in the user, but they will also be able to use the buttons to sign in and out.
We now need to add two callbacks to our Activity class:
@Override
public void onSignInSucceeded() {
    findViewById(R.id.sign_in_button).setVisibility(View.GONE);
    findViewById(R.id.sign_out_button).setVisibility(View.VISIBLE);
}

@Override
public void onSignInFailed() {
    findViewById(R.id.sign_in_button).setVisibility(View.VISIBLE);
    findViewById(R.id.sign_out_button).setVisibility(View.GONE);
}
You could add more code to these if necessary. You may also choose to save player progress even if they are not signed in, but this depends on your game. In the sample application, we take the simple approach of checking that we have a connection to Google Services before we attempt to work with the achievement.
Step 3
Before you start coding the details of using achievements in your app, you need to add some data to it. Start by opening or creating your res/values/ids.xml file and add string resources for the app and achievement IDs you copied from the Developer Console:
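The resource names below match what the code later in this tutorial expects; the values shown are placeholders, so replace them with the IDs you copied:

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <string name="app_id">123456789012</string>
    <string name="correct_guess_achievement">CgkI-your-achievement-id</string>
</resources>

Next, open AndroidManifest.xml and add the following meta-data elements inside the application element:

<meta-data android:name="com.google.android.gms.games.APP_ID"
    android:value="@string/app_id" />
<meta-data android:name="com.google.android.gms.version"
    android:value="@integer/google_play_services_version" />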
We reference the app ID we added to the ids file and the Play Services version. This is all you need to get started coding with your achievement.
5. Implement Your Achievement
Step 1
All that remains now is for you to unlock the achievement when the game player meets the achievement's requirements. Naturally this will depend on the purpose of your own game, but if you want to carry out the process using the sample app of this tutorial, then you can use the following code. We start with the main layout, which includes the sign-in and sign-out buttons we added earlier:
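The complete layout ships with the sample code; the abbreviated sketch below only shows the views the Activity code refers to (the info id and the styling are illustrative):

<TextView
    android:id="@+id/info"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Guess the number!" />

<!-- One button per digit, each carrying its value in the tag attribute,
     plus an "again" button with a negative tag. -->
<Button
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="0"
    android:tag="0"
    android:onClick="btnPressed" />

<Button
    android:id="@+id/show_achievements"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Achievements" />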
Notice that we also include an Achievements button next to the buttons for signing in and out. We will implement that button later. We won't go into too much detail on the sample game; if you have completed even simple apps before, it shouldn't be too difficult to follow.
The game selects a random number between 0 and 9, and the player can choose a number button to make a guess. The game updates the text field to reflect whether or not the user guessed correctly. If a correct guess is made, the achievement is unlocked.
Step 2
Switch back to your Activity class and add the following instance variables:
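The button handler below relies on three fields; the names must match, but how you initialize them is up to you (info via findViewById and rand as a new Random instance in onCreate; remember to import java.util.Random and android.widget.TextView):

private TextView info;   // text view giving feedback to the player
private Random rand;     // random number source
private int number;      // the number the player has to guess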
Now add the method we set as onClick attribute for the buttons:
public void btnPressed(View v) {
    int btn = Integer.parseInt(v.getTag().toString());
    if (btn < 0) {
        // again button
        number = rand.nextInt(10);
        enableNumbers();
        info.setText("Guess the number!");
    }
    else {
        // number button
        if (btn == number) {
            info.setText("Yes! It was " + number);
            if (getApiClient().isConnected())
                Games.Achievements.unlock(getApiClient(),
                        getString(R.string.correct_guess_achievement));
        }
        else {
            info.setText("No! It was " + number);
        }
        disableNumbers();
    }
}
We call the Games Achievements utility to unlock the achievement if the user's guess was correct, first checking that we have a connection. We refer to the achievement using the string resource we created.
Last but not least, let's allow the user to view their achievements for the game. This will happen when they click the Achievements button we added. Extend the code in the onClick method, adding an additional else if:
else if (view.getId() == R.id.show_achievements) {
    startActivityForResult(Games.Achievements.getAchievementsIntent(
            getApiClient()), 1);
}
We use the getAchievementsIntent method with an arbitrary integer to display the user achievements within the game. Hitting the back button will bring the player back to the game. For more on the achievements methods, see the Achievements in Android page on the Developer Guide.
Step 3
You should be able to run your app at this point. When the app runs, it will start the user authorization and sign-in process, prompting the user to grant the necessary permissions.
Once signed in, the user will see a confirmation.
The user can choose to sign out and back in whenever they like. When the user selects a correct number, the achievement is unlocked and is displayed on top of the game screen.
The player can then continue as normal. Clicking the Achievements button will display their achievements. Tapping the back button will bring the player back to the game.
Conclusion
In this tutorial, we have explored a practical example of using achievements in an Android application. The sample app we used is simple, but the same principles apply to any game you are working with. Try adding another achievement to the Developer Console and implement it in your game to ensure you understand the concepts and processes.
The Core Data framework has been around for many years. It's used in thousands of applications and by millions of people, both on iOS and OS X. Core Data is maintained by Apple and very well documented. It's a mature framework that has proven its value over and over.
Core Data takes advantage of the Objective-C language and its runtime, and neatly integrates with the Core Foundation framework. The result is an easy-to-use framework for managing an object graph that is elegant and incredibly efficient in terms of memory usage.
1. Prerequisites
Even though the Core Data framework isn't difficult per se, if you're new to iOS or OS X development, then I recommend you first go through our series about iOS development. It teaches you the fundamentals of iOS development and, at the end of the series, you will have enough knowledge to take on more complex topics, such as Core Data.
As I said, Core Data isn't as complex or difficult to pick up as most developers think. However, I've learned that a solid foundation is critical to get up to speed with Core Data. You need to have a proper understanding of the Core Data API to avoid bad practices and make sure you don't run into problems using the framework.
Every component of the Core Data framework has a specific purpose and function. If you try to use Core Data in a way it wasn't designed for, you will inevitably end up struggling with the framework.
What I cover in this series on Core Data is applicable to iOS 6+ and OS X 10.8+, but the focus will be on iOS. In this series, I will work with Xcode 5 and the iOS 7 SDK.
2. Learning Curve
The Core Data framework can seem daunting at first, but the API is intuitive and concise once you understand how the various pieces of the puzzle fit together. And that's exactly where most developers run into problems. They try to use Core Data before they've seen that proverbial puzzle; they don't know how the pieces of the puzzle fit together and relate to one another.
In this article, I will help you become familiar with the so-called Core Data stack. Once you understand the key players of the Core Data stack, the framework will feel less daunting and you will even start to enjoy and appreciate the framework's well-crafted API.
In contrast to frameworks like UIKit, which you can use without understanding the framework in its entirety, Core Data demands a proper understanding of its building blocks. It's important to set aside some time to become familiar with the framework, which we'll do in this tutorial.
3. What is Core Data?
Developers new to the Core Data framework often confuse it with and expect it to work as a database. If there's one thing I hope you'll remember from this series, it is that Core Data isn't a database and you shouldn't expect it to act like one.
What is Core Data if it isn't a database? Core Data is the model layer of your application in the broadest sense possible. It's the Model in the Model-View-Controller pattern that permeates the iOS SDK.
Core Data isn't the database of your application nor is it an API for persisting data to a database. Core Data is a framework that manages an object graph. It's as simple as that. Core Data can persist that object graph by writing it to disk, but that is not the primary goal of the framework.
4. Core Data Stack
As I mentioned earlier, the Core Data stack is the heart of Core Data. It's a collection of objects that make Core Data tick. The key objects of the stack are the managed object model, the persistent store coordinator, and one or more managed object contexts. Let's start by taking a quick look at each component.
NSManagedObjectModel
The managed object model is the data model of the application. Even though Core Data isn't a database, you can compare the managed object model to the schema of a database, that is, it contains information about the models or entities of the object graph, what attributes they have, and how they relate to one another.
The NSManagedObjectModel object knows about the data model by loading one or more data model files during its initialization. We'll take a look at how this works in a few moments.
NSPersistentStoreCoordinator
As its name indicates, the NSPersistentStoreCoordinator object persists data to disk and ensures the persistent store(s) and the data model are compatible. It mediates between the persistent store(s) and the managed object context(s) and also takes care of loading and caching data. That's right. Core Data has caching built in.
The persistent store coordinator is the conductor of the Core Data orchestra. Despite its important role in the Core Data stack, we rarely interact with it directly.
NSManagedObjectContext
The NSManagedObjectContext object manages a collection of model objects, instances of the NSManagedObject class. It's perfectly possible to have multiple managed object contexts. Each managed object context is backed by a persistent store coordinator.
You can see a managed object context as a workbench on which you work with your model objects. You load them, you manipulate them, and save them on that workbench. Loading and saving are mediated by the persistent store coordinator. You can have multiple workbenches, which is useful if your application is multithreaded, for example.
While a managed object model and persistent store coordinator can be shared across threads, managed object contexts should never be accessed from a thread different than the one they were created on. We'll discuss multithreading in more detail later in this series.
5. Exploring the Core Data Stack
Step 1: Xcode Template
Let's explore the Core Data stack in more detail by taking a look at Apple's Xcode template for Core Data. Create a new project in Xcode 5 by selecting New > Project... from the File menu. Choose the Empty Application template from the list of iOS Application templates on the left.
Name the project Core Data, set Devices to iPhone, and check the checkbox labeled Use Core Data. Tell Xcode where you'd like to store the project files and hit Create.
Step 2: Overview
By default, Apple puts code related to Core Data in the application's delegate class, the TSPAppDelegate class in our example. Let's start by exploring the interface of TSPAppDelegate.
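The interface, as generated by the template, looks like this:

@interface TSPAppDelegate : UIResponder <UIApplicationDelegate>

@property (strong, nonatomic) UIWindow *window;

@property (readonly, strong, nonatomic) NSManagedObjectContext *managedObjectContext;
@property (readonly, strong, nonatomic) NSManagedObjectModel *managedObjectModel;
@property (readonly, strong, nonatomic) NSPersistentStoreCoordinator *persistentStoreCoordinator;

- (void)saveContext;
- (NSURL *)applicationDocumentsDirectory;

@end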
As you can see, the application delegate has a property for each component of the Core Data stack as well as two convenience methods, saveContext and applicationDocumentsDirectory.
Note that the Core Data properties are marked as readonly, which means that the instances cannot be modified by objects other than the application delegate itself.
The implementation of TSPAppDelegate is much more interesting and will show us how the managed object model, the persistent store coordinator, and the managed object context work together. Let's start from the top.
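The implementation opens with three @synthesize directives, one for each Core Data property:

@implementation TSPAppDelegate

@synthesize managedObjectContext = _managedObjectContext;
@synthesize managedObjectModel = _managedObjectModel;
@synthesize persistentStoreCoordinator = _persistentStoreCoordinator;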
Because the properties in the interface of the TSPAppDelegate class are declared as readonly, no setter methods are created. The first @synthesize directive tells the compiler to associate the _managedObjectContext instance variable with the managedObjectContext property we declared in the interface of the class; the other two directives do the same for their properties. These explicit instance variables are what make the lazy loading pattern we'll see in the getters possible.
You can accomplish the same result by using a private class extension in the class's implementation file. Take a look at the following code snippet. The @synthesize directives are not needed if you use a private class extension.
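Redeclaring the properties in a class extension looks like this:

@interface TSPAppDelegate ()

@property (strong, nonatomic) NSManagedObjectContext *managedObjectContext;
@property (strong, nonatomic) NSManagedObjectModel *managedObjectModel;
@property (strong, nonatomic) NSPersistentStoreCoordinator *persistentStoreCoordinator;

@end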
Even though I usually use and prefer a private class extension, we'll stick with Apple's template for this tutorial.
Setting up the Core Data stack is actually pretty straightforward in terms of the methods that need to be implemented. Apple doesn't use special setup methods to create the Core Data stack. The three key objects of the Core Data stack are created when they are needed. In other words, they are lazily loaded or instantiated.
In practice, this means that the implementation of the TSPAppDelegate class looks similar to what you'd expect in an application delegate class, with the exception of the saveContext and applicationDocumentsDirectory methods, and the getter methods of managedObjectContext, managedObjectModel, and persistentStoreCoordinator. It's in these getter methods that the magic happens. That is one of the beauties of Core Data: the setup is very simple and interacting with Core Data is just as easy.
Step 3: Managed Object Context
The class you'll use most often, apart from NSManagedObject, when interacting with Core Data is NSManagedObjectContext. Let's start by exploring its getter.
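Here is the getter as it appears in the template:

- (NSManagedObjectContext *)managedObjectContext
{
    if (_managedObjectContext != nil) {
        return _managedObjectContext;
    }

    NSPersistentStoreCoordinator *coordinator = [self persistentStoreCoordinator];
    if (coordinator != nil) {
        _managedObjectContext = [[NSManagedObjectContext alloc] init];
        [_managedObjectContext setPersistentStoreCoordinator:coordinator];
    }
    return _managedObjectContext;
}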
The first three lines of its implementation are typical for a getter that lazily loads the instance variable. If the NSManagedObjectContext object isn't nil, it returns the object. The interesting bit is the actual instantiation of the NSManagedObjectContext object.
We first grab a reference to the persistent store coordinator by calling its getter method. The persistent store coordinator is also lazily loaded, as we'll see in a moment. If the persistent store coordinator isn't nil, we create an NSManagedObjectContext instance and set its persistentStoreCoordinator property to the persistent store coordinator. That wasn't too difficult, was it?
To sum up, the managed object context manages a collection of model objects, instances of the NSManagedObject class, and keeps a reference to a persistent store coordinator. Keep this in mind while reading the rest of this article.
Step 4: Persistent Store Coordinator
As we saw a moment ago, the persistentStoreCoordinator method is invoked by the managedObjectContext method. Take a look at the implementation of persistentStoreCoordinator, but don't let it scare you. It's actually not that complicated.
- (NSPersistentStoreCoordinator *)persistentStoreCoordinator
{
    if (_persistentStoreCoordinator != nil) {
        return _persistentStoreCoordinator;
    }

    NSURL *storeURL = [[self applicationDocumentsDirectory] URLByAppendingPathComponent:@"Core_Data.sqlite"];

    NSError *error = nil;
    _persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:[self managedObjectModel]];
    if (![_persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeURL options:nil error:&error]) {
        /*
         Replace this implementation with code to handle the error appropriately.

         abort() causes the application to generate a crash log and terminate. You should not use this function in a shipping application, although it may be useful during development.

         Typical reasons for an error here include:
         * The persistent store is not accessible;
         * The schema for the persistent store is incompatible with current managed object model.
         Check the error message to determine what the actual problem was.

         If the persistent store is not accessible, there is typically something wrong with the file path. Often, a file URL is pointing into the application's resources directory instead of a writeable directory.

         If you encounter schema incompatibility errors during development, you can reduce their frequency by:
         * Simply deleting the existing store:
           [[NSFileManager defaultManager] removeItemAtURL:storeURL error:nil]

         * Performing automatic lightweight migration by passing the following dictionary as the options parameter:
           @{NSMigratePersistentStoresAutomaticallyOption:@YES, NSInferMappingModelAutomaticallyOption:@YES}

         Lightweight migration will only work for a limited set of schema changes; consult "Core Data Model Versioning and Data Migration Programming Guide" for details.
         */
        NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
        abort();
    }

    return _persistentStoreCoordinator;
}
You will almost always want to store Core Data's object graph to disk and Apple's Xcode template uses a SQLite database to accomplish this.
When we create the persistent store coordinator in persistentStoreCoordinator, we specify the location of the store on disk. We start by creating an NSURL object that points to that location in the application's sandbox. We invoke applicationDocumentsDirectory, a helper method, which returns the location, an NSURL object, of the Documents directory in the application's sandbox. We append Core_Data.sqlite to the location and store it in storeURL for later use.
By default, the name of the store on disk is the same as the name of the project. You can change this to whatever you want though.
As I mentioned a moment ago, the .sqlite extension hints that the store on disk is a SQLite database. Even though Core Data supports several store types, SQLite is by far the most used store type because of its speed and reliability.
In the next step, we instantiate the persistent store coordinator by invoking initWithManagedObjectModel: and passing an NSManagedObjectModel instance. We grab a reference to the managed object model by invoking the managedObjectModel method, which we'll explore next.
We now have an instance of the NSPersistentStoreCoordinator class, but there are no stores associated with it yet. We add a store to the persistent store coordinator by calling a pretty impressive method on it, addPersistentStoreWithType:configuration:URL:options:error:.
if (![_persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeURL options:nil error:&error]) {
}
The first argument specifies the store type, NSSQLiteStoreType in this example. Core Data also supports binary stores (NSBinaryStoreType) and an in memory store (NSInMemoryStoreType).
The second argument tells Core Data which configuration to use for the persistent store. We pass in nil, which tells Core Data to use the default configuration. The third argument is the location of the store, which is stored in storeURL.
The fourth argument is an NSDictionary of options that lets us alter the behavior of the persistent store. We'll revisit this aspect later in this series and pass in nil for now. The last argument is a reference to an NSError pointer.
If no errors pop up, this method returns an NSPersistentStore object. We don't keep a reference to the persistent store, because we don't need to interact with it once it's added to the persistent store coordinator.
If adding the persistent store fails, though, it means that there's a problem with the persistent store of the application and we need to take the necessary steps to resolve the problem. When this happens and why it happens is the subject of a future installment.
At the moment, abort is invoked when addPersistentStoreWithType:configuration:URL:options:error: returns nil. As the comments in the if statement explain, you should never call abort in a production environment, because it crashes the application. We will remedy this later in this series.
Step 5: Managed Object Model
The third and final piece of the puzzle is the managed object model. Let's take a look at the getter of the managedObjectModel property.
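In the template, it reads as follows:

- (NSManagedObjectModel *)managedObjectModel
{
    if (_managedObjectModel != nil) {
        return _managedObjectModel;
    }
    NSURL *modelURL = [[NSBundle mainBundle] URLForResource:@"Core_Data" withExtension:@"momd"];
    _managedObjectModel = [[NSManagedObjectModel alloc] initWithContentsOfURL:modelURL];
    return _managedObjectModel;
}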
The implementation is very easy. We store the location of the application's model in modelURL and pass modelURL to initWithContentsOfURL: to create an instance of the NSManagedObjectModel class.
At this point, you're probably wondering what model modelURL is pointing to and what the file with the .momd extension is. To answer these questions, we need to find out what else Xcode created for us during the project's setup.
In the Project Navigator on the left, you should see a file named Core_Data.xcdatamodeld. This is the data model of the application that's compiled to a .momd file. It's that .momd file that the managed object model uses to create the application's data model.
It is possible to have several data model files. The NSManagedObjectModel class is perfectly capable of merging multiple data models into one; this is one of the more powerful and advanced features of Core Data.
The Core Data framework also supports data model versioning as well as migrations. This ensures that the data stored in the persistent store(s) doesn't get corrupted. We will cover versioning and migrations later in this series.
The data model file in our project is empty at the moment, which means that our data model contains no entities. We'll remedy this in the next tutorial that will focus exclusively on the data model.
6. Putting It All Together
Before we wrap up this article, I'd like to show you a diagram that illustrates the three components of the Core Data stack.
The above diagram is a visual representation of what we explored in the Xcode template a moment ago. The NSPersistentStoreCoordinator object is the brain of the Core Data stack of the application. It talks to one or more persistent stores and makes sure data is saved, loaded, and cached.
The persistent store coordinator knows about the data model, the schema of the object graph if you like, through the NSManagedObjectModel object. The managed object model creates the application's data model from one or more .momd files, binary representations of the data model.
Last but not least, the application accesses the object graph through one or more instances of the NSManagedObjectContext class. A managed object context knows about the data model through the persistent store coordinator, but it doesn't know or keep a reference to the managed object model. There is no need for that connection.
The managed object context asks the persistent store coordinator for data and tells it to save data when necessary. This is all done for you by the Core Data framework and your application rarely needs to talk to the persistent store coordinator directly.
Conclusion
In this article, we covered the key players of the Core Data stack, the persistent store coordinator, the managed object model, and the managed object context. Make sure you understand the role of each component and, more importantly, how they work together to make Core Data do its magic.
In the next installment of this series on Core Data, we dive into the data model. We take a look at the data model editor in Xcode 5 and we create a few entities, attributes, and relationships.
The majority of modern mobile devices are equipped with accelerometers, gyroscopes, and compasses. In my previous article about the Geolocation API, I described how developers can use the data offered by the Geolocation API to improve the user experience. Another interesting API is the Device Orientation API, which is the focus of this tutorial.
Detecting the orientation of a device is useful for a wide range of applications, from navigation applications to games. Have you ever played a racing game on a mobile device that lets you control the car by tilting the device?
Another application of the API is updating the user interface of an application when the orientation of the device changes to offer the user the best possible experience by taking advantage of the entire screen. If you're a fan of YouTube, then you most certainly have taken advantage of this feature.
In this article, I'll introduce you to the Device Orientation API, explaining what type of data it can offer us and how to leverage it in your applications.
1. What is it?
To quote the W3C specification of the Device Orientation API the API "[...] defines several new DOM events that provide information about the physical orientation and motion of a hosting device." The data provided by the API is obtained from various sources, such as the device's gyroscope, the accelerometer, and the compass. This differs from device to device, depending on which sensors are available.
This API is a W3C Working Draft, which means the specification isn't stable and we can expect some changes in the future. It's also worth noting that the API has some known inconsistencies across browsers and operating systems. For example, the implementation in Chrome and Opera, both based on the Blink rendering engine, has a compatibility issue with Windows 8 for the deviceorientation event. Another example is that the interval property is not constant in Opera Mobile.
2. Implementation
The API exposes three events that provide information about the orientation of the device:
deviceorientation
devicemotion
compassneedscalibration
These events are fired on the window object, which means that we need to attach a handler to the window object. Let's take a look at each of these events.
deviceorientation
The deviceorientation event is fired when the accelerometer detects a change of the device orientation. As I mentioned earlier, we can listen for this event and respond to any changes by attaching an event handler to the window object. When the event handler is invoked, it will receive one argument of type DeviceOrientationEvent, which contains four properties:
alpha is the angle around the z-axis. Its value ranges from 0 to 360 degrees. When the top of the device points to the True North, the value of this property is 0.
beta is the angle around the x-axis. Its value ranges from -180 to 180 degrees. When the device is parallel to the surface of the Earth, the value of this property is 0.
gamma is the angle around the y-axis. Its value ranges from -90 to 90 degrees. When the device is parallel to the surface of the Earth, the value of this property is 0.
absolute specifies whether the device is providing orientation data that's relative to the Earth's coordinate system, in which case its value is true, or to an arbitrary coordinate system.
The following illustration, taken from the official specification, shows the x, y, and z axes mentioned relative to the device.
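For example, a handler that simply logs the four properties looks like this:

window.addEventListener('deviceorientation', function(event) {
    console.log('alpha: ' + event.alpha);
    console.log('beta: ' + event.beta);
    console.log('gamma: ' + event.gamma);
    console.log('absolute: ' + event.absolute);
}, false);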
devicemotion
The devicemotion event is fired every time the device accelerates or decelerates. You can listen for this event just as we did for the deviceorientation event. When the event handler is invoked, it receives one argument of type DeviceMotionEvent, which has four properties:
acceleration specifies the acceleration of the device relative to the Earth frame on the x, y, and z axes, accessible through its x, y, and z properties. The values are expressed in m/s².
accelerationIncludingGravity holds the same values as the acceleration property, but it takes Earth's gravity into account. The values of this property should be used in situations where the device's hardware doesn't know how to remove gravity from the acceleration data. In fact, in such cases the acceleration property should not be provided by the user agent.
rotationRate specifies the rate at which the device is rotating around each of its axes in degrees per second. We can access the individual values of rotationRate through its alpha, beta, and gamma properties.
interval provides the interval at which data is obtained. This value must not change once it's set. It is expressed in milliseconds.
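As an illustration, the following handler logs the acceleration including gravity and the reporting interval:

window.addEventListener('devicemotion', function(event) {
    var acceleration = event.accelerationIncludingGravity;
    console.log('x: ' + acceleration.x);
    console.log('y: ' + acceleration.y);
    console.log('z: ' + acceleration.z);
    console.log('interval: ' + event.interval + ' ms');
}, false);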
compassneedscalibration
This event is fired when the user agent determines the compass requires calibration. The specification also states that "user agents should only fire the event if calibrating the compass will increase the accuracy of the data provided by the deviceorientation event." This event should be used to inform the user that the compass needs calibration and it should also instruct the user how to do this.
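A bare-bones handler might look like this:

window.addEventListener('compassneedscalibration', function(event) {
    // Inform the user and prevent the user agent's default calibration UI, if any.
    alert('Your compass needs calibration. Wave your device in a figure-eight motion.');
    event.preventDefault();
}, false);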
3. Detecting Support
Detecting whether the browser or user agent supports the first two events, deviceorientation and devicemotion, is as simple as a conditional statement. Take a look at the following code snippet in which we detect support for the deviceorientation event:
if (window.DeviceOrientationEvent) {
    // We can listen for changes in the device's orientation...
}
else {
    // Not supported
}
To test for the compassneedscalibration event, we use the following code snippet:
if ('oncompassneedscalibration' in window) {
    // Event supported
}
else {
    // Event not supported
}
4. Browser Support
Even though support for the Device Orientation API is good, we need to keep a few things in mind when working with the API. Apart from the caveats mentioned in the introduction, the absolute property is undefined in Mobile Safari.
However, the real problem is that every browser that supports the Device Orientation API only supports it partially. In fact, at the time of writing, very few browsers support the compassneedscalibration event. Execute the above code snippet in Chrome or Firefox to illustrate the problem.
With this in mind, the browsers that support the Device Orientation API are Chrome 7+, Firefox 6+, Opera 15+, and Internet Explorer 11. Support by mobile browsers is even better. In addition to the ones I've already mentioned, the API is also supported by the browser of BlackBerry 10, Opera Mobile 12+, Mobile Safari 4.2+, and the stock Android browser 3+.
For an up to date and accurate picture of support for the Device Orientation API, I recommend visiting Can I use....
5. Demo
We now know what we need to create a demo application that leverages the Device Orientation API. The purpose of this demo is to create a cube, using plain HTML and CSS, and rotate it as the device's orientation changes.
We'll also display the information we retrieve from the API, which shows the type of data we get back from the Device Orientation API. We also show the information in raw text as some browsers may support the Device Orientation API but not the CSS properties to render the cube. This is the case for Opera Mobile, for example.
Because we know that not every browser supports the API, we also test for support of every feature of the API and display this to the user.
The source code for the demo application is shown below, but you can also see it in action.
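While the full source includes the markup and CSS for the cube as well as the feature detection, the heart of it is a deviceorientation handler that maps the three angles onto CSS 3D transforms. A stripped-down sketch, assuming an element with the id cube, looks like this:

window.addEventListener('deviceorientation', function(event) {
    var cube = document.getElementById('cube');
    // Map the device angles onto CSS rotations (vendor-prefixed
    // transform properties may be needed in older browsers).
    cube.style.transform =
        'rotateX(' + event.beta + 'deg) ' +
        'rotateY(' + event.gamma + 'deg) ' +
        'rotateZ(' + event.alpha + 'deg)';
}, false);

Conclusion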
In this article we've explored the Device Orientation API by taking a look at its features and potential use cases for it. Support for the API isn't great at the time of writing, but I'm sure you agree it opens up a lot of possibilities for mobile developers, game developers in particular. Don't forget to play with the demo to see the API in action.