In this tutorial, you will learn how to use the JobScheduler API available in Android Lollipop. The JobScheduler API allows developers to create jobs that execute in the background when certain conditions are met.
Introduction
When working with Android, there will be occasions where you will want to run a task at a later point in time or under certain conditions, such as when a device is plugged into a power source or connected to a Wi-Fi network. Thankfully with API 21, known by most people as Android Lollipop, Google has provided a new component known as the JobScheduler API to handle this very scenario.
The JobScheduler API performs an operation for your application when a set of predefined conditions are met. Unlike the AlarmManager class, the timing isn't exact. In addition, the JobScheduler API is able to batch various jobs to run together. This allows your app to perform the given task while being considerate of the device's battery at the cost of timing control.
In this article, you will learn more about the JobScheduler API and the JobService class by using them to run a simple background task in an Android application. The code for this tutorial is available on GitHub.
1. Creating the Job Service
To start, you're going to want to create a new Android project with a minimum required API of 21, because the JobScheduler API was added in the most recent version of Android and, at the time of writing, is not backwards compatible through a support library.
Assuming you're using Android Studio, after you've clicked the Finish button for the new project, you should have a bare-bones "Hello World" application. The first step you're going to take with this project is to create a new Java class. To keep things simple, let's name it JobSchedulerService and extend the JobService class, which requires that two methods be implemented: onStartJob(JobParameters params) and onStopJob(JobParameters params).
public class JobSchedulerService extends JobService {

    @Override
    public boolean onStartJob(JobParameters params) {
        return false;
    }

    @Override
    public boolean onStopJob(JobParameters params) {
        return false;
    }
}
onStartJob(JobParameters params) is the method you must implement, because it is what the system calls to trigger jobs that have already been scheduled. As you can see, the method returns a boolean value. If the return value is false, the system assumes that whatever task has run did not take long and is done by the time the method returns. If the return value is true, then the system assumes that the task is going to take some time and the burden falls on you, the developer, to tell the system when the given task is complete by calling jobFinished(JobParameters params, boolean wantsReschedule).
onStopJob(JobParameters params) is used by the system to cancel pending tasks when a cancel request is received. It's important to note that if onStartJob(JobParameters params) returns false, the system assumes there are no jobs currently running when a cancel request is received. In other words, it simply won't call onStopJob(JobParameters params).
One thing to note is that the job service runs on your application's main thread. This means that you have to use another thread, a handler, or an asynchronous task to run longer tasks so that you don't block the main thread. Because multithreading techniques are beyond the scope of this tutorial, let's keep it simple and implement a handler to run our task in the JobSchedulerService class.
In the handler, you implement the handleMessage(Message msg) method that is part of the Handler class and have it run your task's logic. In this case, we're keeping things very simple and posting a Toast message from the application, though this is where you would put your logic for things like syncing data.
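As a sketch of what this handler might look like (the member name mJobHandler and the Toast text are illustrative, not from the source; the JobParameters object is assumed to arrive via msg.obj so that jobFinished can be called):

```java
private Handler mJobHandler = new Handler( new Handler.Callback() {

    @Override
    public boolean handleMessage( Message msg ) {
        // Illustrative task: show a Toast, then tell the system the job is done.
        Toast.makeText( getApplicationContext(),
                "JobService task running", Toast.LENGTH_SHORT ).show();
        // false: do not reschedule; the task completed successfully.
        jobFinished( (JobParameters) msg.obj, false );
        return true;
    }

} );
```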
When the task is done, you need to call jobFinished(JobParameters params, boolean wantsReschedule) to let the system know that you're done with that task and that it can begin queuing up the next operation. If you don't do this, your jobs will only run once and your application will not be allowed to perform additional jobs.
The two parameters that jobFinished(JobParameters params, boolean wantsReschedule) takes are the JobParameters that were passed to the JobService class in the onStartJob(JobParameters params) method and a boolean value that lets the system know whether it should reschedule the job based on the job's original requirements. This boolean value is worth understanding, because it is how you handle situations where your task is unable to complete, for example because of a failed network call.
With the Handler instance created, you can go ahead and start implementing the onStartJob(JobParameters params) and onStopJob(JobParameters params) methods to control your tasks. You'll notice that in the following code snippet, the onStartJob(JobParameters params) method returns true. This is because you're going to use a Handler instance to control your operation, which means that it could take longer to finish than the onStartJob(JobParameters params) method. By returning true, you're letting the system know that you will manually call the jobFinished(JobParameters params, boolean wantsReschedule) method. You'll also notice that the number 1 is being passed to the Handler instance. This is the identifier that you're going to use for referencing the job.
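One possible implementation, consistent with the description above (the mJobHandler member and the message identifier 1 are assumptions carried over from the handler sketch):

```java
@Override
public boolean onStartJob( JobParameters params ) {
    // Hand the work off to the Handler; 1 is the message identifier.
    mJobHandler.sendMessage( Message.obtain( mJobHandler, 1, params ) );
    return true;
}

@Override
public boolean onStopJob( JobParameters params ) {
    // Drop any queued work when the system cancels the job.
    mJobHandler.removeMessages( 1 );
    return false;
}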
Once you're done with the Java portion of the JobSchedulerService class, you need to go into AndroidManifest.xml and add a node for the service so that your application has permission to bind to and use this class as a JobService.
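A minimal service declaration might look like the following (the relative class name assumes the service lives in your application's root package):

```xml
<service
    android:name=".JobSchedulerService"
    android:permission="android.permission.BIND_JOB_SERVICE" />
```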
With the JobSchedulerService class finished, we can start looking at how your application will interact with the JobScheduler API. The first thing you need to do is create a JobScheduler object, called mJobScheduler in the sample code, and initialize it by getting an instance of the system service JOB_SCHEDULER_SERVICE. In the sample application, this is done in the MainActivity class.
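A sketch of that initialization (the layout resource name is a placeholder):

```java
private JobScheduler mJobScheduler;

@Override
protected void onCreate( Bundle savedInstanceState ) {
    super.onCreate( savedInstanceState );
    setContentView( R.layout.activity_main );

    // Grab a handle to the system's job scheduler service.
    mJobScheduler = (JobScheduler)
            getSystemService( Context.JOB_SCHEDULER_SERVICE );
}
```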
When you want to create your scheduled task, you can use the JobInfo.Builder to construct a JobInfo object that gets passed to your service. To create a JobInfo object, JobInfo.Builder accepts two parameters. The first is the identifier of the job that you will run and the second is the component name of the service that you will use with the JobScheduler API.
JobInfo.Builder builder = new JobInfo.Builder( 1,
        new ComponentName( getPackageName(),
                JobSchedulerService.class.getName() ) );
This builder allows you to set many different options for controlling when your job will execute. The following code snippet shows how you could set your task to run periodically every three seconds.
builder.setPeriodic( 3000 );
Other methods include:
setMinimumLatency(long minLatencyMillis): This makes your job not launch until the stated number of milliseconds have passed. This is incompatible with setPeriodic(long time) and will cause an exception to be thrown if they are both used.
setOverrideDeadline(long maxExecutionDelayMillis): This will set a deadline for your job. Even if other requirements are not met, your task will start approximately when the stated time has passed. Like setMinimumLatency(long time), this function is mutually exclusive with setPeriodic(long time) and will cause an exception to be thrown if they are both used.
setPersisted(boolean isPersisted): This function tells the system whether your task should continue to exist after the device has been rebooted.
setRequiredNetworkType(int networkType): This function will tell your job that it can only start if the device is on a specific kind of network. The default is JobInfo.NETWORK_TYPE_NONE, meaning that the task can run whether there is network connectivity or not. The other two available types are JobInfo.NETWORK_TYPE_ANY, which requires some type of network connection available for the job to run, and JobInfo.NETWORK_TYPE_UNMETERED, which requires that the device be on a non-cellular network.
setRequiresCharging(boolean requiresCharging): Using this function will tell your application that the job should not start until the device has started charging.
setRequiresDeviceIdle(boolean requiresDeviceIdle): This tells your job not to start unless the device is idle, that is, the user hasn't interacted with it for some time.
It's important to note that setRequiredNetworkType(int networkType), setRequiresCharging(boolean requiresCharging) and setRequiresDeviceIdle(boolean requiresDeviceIdle) may cause your job to never start unless setOverrideDeadline(long maxExecutionDelayMillis) is also set, allowing your job to run even if conditions are not met. Once the preferred conditions are stated, you can build the JobInfo object and send it to your JobScheduler object as shown below.
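A sketch of building and scheduling the job, continuing from the builder created earlier:

```java
// Build the JobInfo and hand it to the scheduler; schedule() returns
// an error code of zero or less on failure.
if( mJobScheduler.schedule( builder.build() ) <= 0 ) {
    // The job was not queued; handle the error here if needed.
}
```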
You'll notice that the schedule operation returns an integer. If schedule fails, it will return a value of zero or less, corresponding to an error code. Otherwise it will return the job identifier that we defined in the JobInfo.Builder.
If your application requires that you stop a specific or all jobs, you can do so by calling cancel(int jobId) or cancelAll() on the JobScheduler object.
mJobScheduler.cancelAll();
You should now be able to use the JobScheduler API with your own applications to batch jobs and run background operations.
Conclusion
In this article, you've learned how to implement a JobService subclass that uses a Handler object to run background tasks for your application. You've also learned how to use the JobInfo.Builder to set requirements for when your service should run. Using these, you should be able to improve how your own applications operate while being mindful of power consumption.
Android's Media Effects framework allows developers to easily apply lots of impressive visual effects to photos and videos. As the framework uses the GPU to perform all its image processing operations, it can only accept OpenGL textures as its input. In this tutorial, you are going to learn how to use OpenGL ES 2.0 to convert a drawable resource into a texture and then use the framework to apply various effects to it.
Prerequisites
To follow this tutorial, you need to have:
an IDE that supports Android application development. If you don't have one, get the latest version of Android Studio from the Android Developer website.
a device that runs Android 4.0+ and has a GPU that supports OpenGL ES 2.0.
a basic understanding of OpenGL.
1. Setting Up the OpenGL ES Environment
Step 1: Create a GLSurfaceView
To display OpenGL graphics in your app, you have to use a GLSurfaceView object. Like any other View, you can add it to an Activity or Fragment by defining it in a layout XML file or by creating an instance of it in code.
In this tutorial, you are going to have a GLSurfaceView object as the only View in your Activity. Therefore, creating it in code is simpler. Once created, pass it to the setContentView method so that it fills the entire screen. Your Activity's onCreate method should look like this:
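A sketch of that onCreate method (requesting an OpenGL ES 2.0 context before any rendering is set up):

```java
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    // Create the view in code and request an OpenGL ES 2.0 context.
    GLSurfaceView view = new GLSurfaceView(this);
    view.setEGLContextClientVersion(2);

    // Make the view fill the entire screen.
    setContentView(view);
}
```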
Step 2: Create a Renderer
A GLSurfaceView.Renderer is responsible for drawing the contents of the GLSurfaceView.
Create a new class that implements the GLSurfaceView.Renderer interface. I am going to call this class EffectsRenderer. After adding a constructor and overriding all the methods of the interface, the class should look like this:
public class EffectsRenderer implements GLSurfaceView.Renderer {

    public EffectsRenderer(Context context) {
        super();
    }

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
    }

    @Override
    public void onDrawFrame(GL10 gl) {
    }
}
Go back to your Activity and call the setRenderer method so that the GLSurfaceView uses the custom renderer.
view.setRenderer(new EffectsRenderer(this));
Step 3: Edit the Manifest
If you plan to publish your app on Google Play, add the following to AndroidManifest.xml:
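The declaration in question is a uses-feature element requiring OpenGL ES 2.0:

```xml
<uses-feature android:glEsVersion="0x00020000" android:required="true" />
```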
This makes sure that your app can only be installed on devices that support OpenGL ES 2.0. The OpenGL environment is now ready.
2. Creating an OpenGL Plane
Step 1: Define Vertices
The GLSurfaceView cannot display a photo directly. The photo has to be converted into a texture and applied to an OpenGL shape first. In this tutorial, we will be creating a 2D plane that has four vertices. For the sake of simplicity, let's make it a square. Create a new class, Square, to represent the square.
public class Square {
}
The default OpenGL coordinate system has its origin at its center. As a result, the coordinates of the four corners of our square, whose sides are two units long, will be:
bottom left corner at (-1, -1)
bottom right corner at (1, -1)
top right corner at (1, 1)
top left corner at (-1, 1)
All the objects we draw using OpenGL should be made up of triangles. To draw the square, we need two triangles with a common edge. This means that the coordinates of the triangles will be:
triangle 1: (-1, -1), (1, -1), and (-1, 1)
triangle 2: (1, -1), (-1, 1), and (1, 1)
To map the texture onto the square, you need to specify the coordinates of the vertices of the texture. Textures follow a coordinate system in which the value of the y-coordinate increases as you go higher. Create another array to represent the vertices of the texture.
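Based on the corner positions listed above, the two arrays might look like this (the field names vertices and textureVertices are assumptions; the vertex order is a triangle strip matching the two triangles described earlier):

```java
// Square corners, in triangle-strip order.
private float vertices[] = {
        -1f, -1f,   // bottom left
         1f, -1f,   // bottom right
        -1f,  1f,   // top left
         1f,  1f,   // top right
};

// Texture coordinates for each corner; the texture's y-axis
// increases upward, so the bitmap's top maps to y = 0 here.
private float textureVertices[] = {
        0f, 1f,
        1f, 1f,
        0f, 0f,
        1f, 0f,
};
```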
Write the code to initialize these buffers in a new method called initializeBuffers. Use the ByteBuffer.allocateDirect method to create the buffer. Because a float uses 4 bytes, you need to multiply the size of the arrays by 4.
Next, use ByteBuffer.nativeOrder to determine the byte order of the underlying native platform, and set the order of the buffers to that value. Use the asFloatBuffer method to convert the ByteBuffer instance into a FloatBuffer. After the FloatBuffer is created, use the put method to load the array into the buffer. Finally, use the position method to make sure that the buffer is read from the beginning.
The contents of the initializeBuffers method should look like this:
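A sketch of initializeBuffers, following the steps just described (it assumes the vertices and textureVertices arrays from the previous step):

```java
private FloatBuffer verticesBuffer;
private FloatBuffer textureBuffer;

private void initializeBuffers() {
    // 4 bytes per float.
    ByteBuffer buff = ByteBuffer.allocateDirect(vertices.length * 4);
    buff.order(ByteOrder.nativeOrder());
    verticesBuffer = buff.asFloatBuffer();
    verticesBuffer.put(vertices);
    verticesBuffer.position(0);

    buff = ByteBuffer.allocateDirect(textureVertices.length * 4);
    buff.order(ByteOrder.nativeOrder());
    textureBuffer = buff.asFloatBuffer();
    textureBuffer.put(textureVertices);
    textureBuffer.position(0);
}
```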
It's time to write your own shaders. Shaders are nothing but simple C programs that are run by the GPU to process every individual vertex. For this tutorial, you have to create two shaders, a vertex shader and a fragment shader.
If you already know OpenGL, this code should be familiar to you because it is common across all platforms. If you don't, to understand these programs you must refer to the OpenGL documentation. Here's a brief explanation to get you started:
The vertex shader is responsible for drawing the individual vertices. aPosition is a variable that will be bound to the FloatBuffer that contains the coordinates of the vertices. Similarly, aTexPosition is a variable that will be bound to the FloatBuffer that contains the coordinates of the texture. gl_Position is a built-in OpenGL variable and represents the position of each vertex. The vTexPosition is a varying variable, whose value is simply passed on to the fragment shader.
In this tutorial, the fragment shader is responsible for coloring the square. It picks up colors from the texture using the texture2D method and assigns them to the fragment using a built-in variable named gl_FragColor.
The shader code needs to be represented as String objects in the class.
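A sketch of those String fields, using the variable names described above (the field names vertexShaderCode and fragmentShaderCode are assumptions):

```java
private final String vertexShaderCode =
        "attribute vec4 aPosition;" +
        "attribute vec2 aTexPosition;" +
        "varying vec2 vTexPosition;" +
        "void main() {" +
        "  gl_Position = aPosition;" +
        "  vTexPosition = aTexPosition;" +   // pass through to the fragment shader
        "}";

private final String fragmentShaderCode =
        "precision mediump float;" +
        "uniform sampler2D uTexture;" +
        "varying vec2 vTexPosition;" +
        "void main() {" +
        "  gl_FragColor = texture2D(uTexture, vTexPosition);" +
        "}";
```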
Create a new method called initializeProgram to create an OpenGL program after compiling and linking the shaders.
Use glCreateShader to create a shader object and return a reference to it in the form of an int. To create a vertex shader, pass the value GL_VERTEX_SHADER to it. Similarly, to create a fragment shader, pass the value GL_FRAGMENT_SHADER to it. Next use glShaderSource to associate the appropriate shader code with the shader. Use glCompileShader to compile the shader code.
After compiling both shaders, create a new program using glCreateProgram. Just like glCreateShader, this too returns an int as a reference to the program. Call glAttachShader to attach the shaders to the program. Finally, call glLinkProgram to link the program.
Your method and the associated variables should look like this:
private int vertexShader;
private int fragmentShader;
private int program;

private void initializeProgram() {
    vertexShader = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER);
    GLES20.glShaderSource(vertexShader, vertexShaderCode);
    GLES20.glCompileShader(vertexShader);

    fragmentShader = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);
    GLES20.glShaderSource(fragmentShader, fragmentShaderCode);
    GLES20.glCompileShader(fragmentShader);

    program = GLES20.glCreateProgram();
    GLES20.glAttachShader(program, vertexShader);
    GLES20.glAttachShader(program, fragmentShader);
    GLES20.glLinkProgram(program);
}
You might have noticed that the OpenGL methods (the methods prefixed with gl) belong to the class GLES20. This is because we are using OpenGL ES 2.0. If you wish to use a higher version, then you will have to use the classes GLES30 or GLES31.
Step 5: Draw the Square
Create a new method called draw to actually draw the square using the vertices and shaders we defined earlier.
Here's what you need to do in this method:
Use glBindFramebuffer to bind frame buffer object 0, the default framebuffer, so that rendering goes to the screen.
Use glUseProgram to start using the program we just linked.
Pass the value GL_BLEND to glDisable to disable the blending of colors while rendering.
Use glGetAttribLocation to get a handle to the variables aPosition and aTexPosition mentioned in the vertex shader code.
Use glGetUniformLocation to get a handle to the uniform uTexture mentioned in the fragment shader code.
Use the glVertexAttribPointer to associate the aPosition and aTexPosition handles with the verticesBuffer and the textureBuffer respectively.
Use glBindTexture to bind the texture (passed as an argument to the draw method) to the fragment shader.
Clear the contents of the GLSurfaceView using glClear.
Finally, use the glDrawArrays method to actually draw the two triangles (and thus the square).
The code for the draw method should look like this:
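A sketch of draw, following the steps above (the handle variable names are assumptions):

```java
public void draw(int texture) {
    // Render to the default framebuffer, i.e. the screen.
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
    GLES20.glUseProgram(program);
    GLES20.glDisable(GLES20.GL_BLEND);

    // Handles to the shader variables.
    int positionHandle = GLES20.glGetAttribLocation(program, "aPosition");
    int textureHandle = GLES20.glGetUniformLocation(program, "uTexture");
    int texturePositionHandle = GLES20.glGetAttribLocation(program, "aTexPosition");

    // Feed the texture coordinates to the vertex shader.
    GLES20.glVertexAttribPointer(texturePositionHandle, 2,
            GLES20.GL_FLOAT, false, 0, textureBuffer);
    GLES20.glEnableVertexAttribArray(texturePositionHandle);

    // Bind the texture to texture unit 0 and point the sampler at it.
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture);
    GLES20.glUniform1i(textureHandle, 0);

    // Feed the vertex positions to the vertex shader.
    GLES20.glVertexAttribPointer(positionHandle, 2,
            GLES20.GL_FLOAT, false, 0, verticesBuffer);
    GLES20.glEnableVertexAttribArray(positionHandle);

    // Clear the view, then draw the two triangles as a strip.
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
}
```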
Add a constructor to the class to initialize the buffers and the program at the time of object creation.
public Square() {
    initializeBuffers();
    initializeProgram();
}
3. Rendering the OpenGL Plane and Texture
Currently, our renderer does nothing. We need to change that so that it can render the plane we created in the previous steps.
But first, let us create a Bitmap. Add any photo to your project's res/drawable folder. The file I am using is called forest.jpg. Use the BitmapFactory to convert the photo into a Bitmap object. Also, store the dimensions of the Bitmap object in separate variables.
Change the constructor of the EffectsRenderer class so that it has the following contents:
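A sketch of the updated constructor (the field names photo, photoWidth, and photoHeight are assumptions; R.drawable.forest corresponds to the forest.jpg file mentioned above):

```java
private Bitmap photo;
private int photoWidth, photoHeight;

public EffectsRenderer(Context context) {
    super();
    // Decode the drawable resource into a Bitmap and remember its size.
    photo = BitmapFactory.decodeResource(context.getResources(),
            R.drawable.forest);
    photoWidth = photo.getWidth();
    photoHeight = photo.getHeight();
}
```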
Create a new method called generateSquare to convert the bitmap into a texture and initialize a Square object. You will also need an array of integers to hold references to the OpenGL textures. Use glGenTextures to initialize the array and glBindTexture to activate the texture at index 0.
Next, use glTexParameteri to set various properties that decide how the texture is rendered:
Set GL_TEXTURE_MIN_FILTER (the minifying function) and the GL_TEXTURE_MAG_FILTER (the magnifying function) to GL_LINEAR to make sure that the texture looks smooth, even when it's stretched or shrunk.
Set GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T to GL_CLAMP_TO_EDGE so that the texture is never repeated.
Finally, use the texImage2D method to map the Bitmap to the texture. The implementation of the generateSquare method should look like this:
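A sketch of generateSquare, along with the associated fields (two texture references are generated so that the second slot can later hold an effect's output):

```java
private int textures[] = new int[2];
private Square square;

private void generateSquare() {
    GLES20.glGenTextures(2, textures, 0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);

    // Smooth scaling when the texture is stretched or shrunk.
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

    // Clamp so the texture is never repeated.
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);

    // Upload the Bitmap into the texture at index 0.
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, photo, 0);

    square = new Square();
}
```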
Whenever the dimensions of the GLSurfaceView change, the onSurfaceChanged method of the Renderer is called. Here's where you have to call glViewPort to specify the new dimensions of the viewport. Also, call glClearColor to paint the GLSurfaceView black. Next, call generateSquare to reinitialize the textures and the plane.
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
    GLES20.glViewport(0, 0, width, height);
    GLES20.glClearColor(0, 0, 0, 1);
    generateSquare();
}
Finally, call the Square object's draw method inside the onDrawFrame method of the Renderer.
@Override
public void onDrawFrame(GL10 gl) {
    square.draw(textures[0]);
}
You can now run your app and see the photo you had chosen being rendered as an OpenGL texture on a plane.
4. Using the Media Effects Framework
The complex code we wrote until now was just a prerequisite to use the Media Effects framework. It's now time to start using the framework itself. Add the following fields to your Renderer class.
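The two fields in question are a context for the framework and a handle to the current effect (the field names are assumptions):

```java
private EffectContext effectContext;
private Effect effect;
```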
Initialize the effectContext field using the EffectContext.createWithCurrentGlContext method. It's responsible for managing the information about the visual effects inside an OpenGL context. To optimize performance, this should be called only once. Add the following code at the beginning of your onDrawFrame method.
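A sketch of that lazy initialization, guarded so it runs only once:

```java
// createWithCurrentGlContext must run on the GL thread, and only once.
if (effectContext == null) {
    effectContext = EffectContext.createWithCurrentGlContext();
}
```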
Creating an effect is very simple. Use the effectContext to create an EffectFactory and use the EffectFactory to create an Effect object. Once an Effect object is available, you can call apply and pass a reference to the original texture to it, in our case it is textures[0], along with a reference to a blank texture object, in our case it is textures[1]. After the apply method is called, textures[1] will contain the result of the Effect.
For example, to create and apply the grayscale effect, here's the code you have to write:
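A sketch of the grayscale case, following the steps just described:

```java
// Create the effect, run it on the original texture, and draw the result.
EffectFactory factory = effectContext.getFactory();
effect = factory.createEffect(EffectFactory.EFFECT_GRAYSCALE);
effect.apply(textures[0], photoWidth, photoHeight, textures[1]);
square.draw(textures[1]);
```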
Some effects take parameters. For instance, the brightness adjustment effect has a brightness parameter which takes a float value. You can use setParameter to change the value of any parameter. The following code shows you how to use it:
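A sketch of the brightness adjustment (the value 2f is an illustrative choice that doubles the brightness):

```java
EffectFactory factory = effectContext.getFactory();
effect = factory.createEffect(EffectFactory.EFFECT_BRIGHTNESS);
effect.setParameter("brightness", 2f);
effect.apply(textures[0], photoWidth, photoHeight, textures[1]);
square.draw(textures[1]);
```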
The effect will make your app render the following result:
Conclusion
In this tutorial, you have learned how to use the Media Effects Framework to apply various effects to your photos. While doing so, you also learned how to draw a plane using OpenGL ES 2.0 and apply various textures to it.
The framework can be applied to both photos and videos. In case of videos, you simply have to apply the effect to the individual frames of the video in the onDrawFrame method.
You have already seen three effects in this tutorial and the framework has dozens more for you to experiment with. To know more about them, refer to the Android Developer's website.
Android's Media Effects framework allows developers to easily apply lots of impressive visual effects to photos and videos. As the framework uses the GPU to perform all its image processing operations, it can only accept OpenGL textures as its input. In this tutorial, you are going to learn how to use OpenGL ES 2.0 to convert a drawable resource into a texture and then use the framework to apply various effects to it.
Prerequisites
To follow this tutorial, you need to have:
an IDE that supports Android application development. If you don't have one, get the latest version of Android Studio from the Android Developer website.
a device that runs Android 4.0+ and has a GPU that supports OpenGL ES 2.0.
a basic understanding of OpenGL.
1. Setting Up the OpenGL ES Environment
Step 1: Create a GLSurfaceView
To display OpenGL graphics in your app, you have to use a GLSurfaceView object. Like any other View, you can add it to an Activity or Fragment by defining it in a layout XML file or by creating an instance of it in code.
In this tutorial, you are going to have a GLSurfaceView object as the only View in your Activity. Therefore, creating it in code is simpler. Once created, pass it to the setContentView method so that it fills the entire screen. Your Activity's onCreate method should look like this:
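Reconstructed from that description, the onCreate method might look like the following sketch (the view variable name is mine; requesting an ES 2.0 context with setEGLContextClientVersion is required before a renderer is set):

```java
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Create the GLSurfaceView in code and request an OpenGL ES 2.0 context.
    GLSurfaceView view = new GLSurfaceView(this);
    view.setEGLContextClientVersion(2);
    // Make the view fill the entire screen.
    setContentView(view);
}
```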
Step 2: Create a Renderer
A GLSurfaceView.Renderer is responsible for drawing the contents of the GLSurfaceView.
Create a new class that implements the GLSurfaceView.Renderer interface. I am going to call this class EffectsRenderer. After adding a constructor and overriding all the methods of the interface, the class should look like this:
public class EffectsRenderer implements GLSurfaceView.Renderer {

    public EffectsRenderer(Context context){
        super();
    }

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
    }

    @Override
    public void onDrawFrame(GL10 gl) {
    }
}
Go back to your Activity and call the setRenderer method so that the GLSurfaceView uses the custom renderer.
view.setRenderer(new EffectsRenderer(this));
Step 3: Edit the Manifest
If you plan to publish your app on Google Play, add the following to AndroidManifest.xml:
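The standard way to declare this requirement is a uses-feature element inside the manifest element:

```xml
<uses-feature
    android:glEsVersion="0x00020000"
    android:required="true" />
```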
This makes sure that your app can only be installed on devices that support OpenGL ES 2.0. The OpenGL environment is now ready.
2. Creating an OpenGL Plane
Step 1: Define Vertices
The GLSurfaceView cannot display a photo directly. The photo has to be converted into a texture and applied to an OpenGL shape first. In this tutorial, we will be creating a 2D plane that has four vertices. For the sake of simplicity, let's make it a square. Create a new class, Square, to represent the square.
public class Square {
}
The default OpenGL coordinate system has its origin at its center. As a result, the coordinates of the four corners of our square, whose sides are two units long, will be:
bottom left corner at (-1, -1)
bottom right corner at (1, -1)
top right corner at (1, 1)
top left corner at (-1, 1)
All the objects we draw using OpenGL should be made up of triangles. To draw the square, we need two triangles with a common edge. This means that the coordinates of the triangles will be:
triangle 1: (-1, -1), (1, -1), and (-1, 1)
triangle 2: (1, -1), (-1, 1), and (1, 1)
To map the texture onto the square, you need to specify the coordinates of the vertices of the texture. Textures follow a coordinate system in which the value of the y-coordinate increases as you go higher. Create another array to represent the vertices of the texture.
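The two arrays might look like the following sketch. The names vertices and textureVertices are my assumptions; the vertex order matches the triangle-strip layout described above, and the texture coordinates are flipped vertically so that the bitmap appears upright (a common adjustment when loading Android bitmaps into OpenGL textures):

```java
// Four vertices of the square, ordered so that a triangle strip
// produces the two triangles listed above.
float[] vertices = {
    -1f, -1f,   // bottom left
     1f, -1f,   // bottom right
    -1f,  1f,   // top left
     1f,  1f    // top right
};

// Texture coordinates for the same four vertices.
float[] textureVertices = {
    0f, 1f,
    1f, 1f,
    0f, 0f,
    1f, 0f
};
```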
Step 2: Create Buffers
Write the code to initialize these buffers in a new method called initializeBuffers. Use the ByteBuffer.allocateDirect method to create the buffers. Because a float uses 4 bytes, you need to multiply the size of the arrays by 4.
Next, use ByteBuffer.nativeOrder to determine the byte order of the underlying native platform, and set the order of the buffers to that value. Use the asFloatBuffer method to convert the ByteBuffer instance into a FloatBuffer. After the FloatBuffer is created, use the put method to load the array into the buffer. Finally, use the position method to make sure that the buffer is read from the beginning.
The contents of the initializeBuffers method should look like this:
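A sketch reconstructed from the steps just described (the field names verticesBuffer and textureBuffer are assumptions; the coordinate arrays are inlined here so the snippet stands alone):

```java
float[] vertices = { -1f, -1f,  1f, -1f,  -1f, 1f,  1f, 1f };
float[] textureVertices = { 0f, 1f,  1f, 1f,  0f, 0f,  1f, 0f };

FloatBuffer verticesBuffer;
FloatBuffer textureBuffer;

void initializeBuffers() {
    // A float occupies 4 bytes, so allocate length * 4 bytes.
    ByteBuffer buff = ByteBuffer.allocateDirect(vertices.length * 4);
    buff.order(ByteOrder.nativeOrder());
    verticesBuffer = buff.asFloatBuffer();
    verticesBuffer.put(vertices);
    verticesBuffer.position(0);

    buff = ByteBuffer.allocateDirect(textureVertices.length * 4);
    buff.order(ByteOrder.nativeOrder());
    textureBuffer = buff.asFloatBuffer();
    textureBuffer.put(textureVertices);
    textureBuffer.position(0);
}
```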
Step 3: Create the Shaders
It's time to write your own shaders. Shaders are small programs, written in a C-like language called GLSL, that are run by the GPU to process every individual vertex. For this tutorial, you have to create two shaders: a vertex shader and a fragment shader.
If you already know OpenGL, this code should be familiar to you because it is common across all platforms. If you don't, refer to the OpenGL documentation to understand these programs. Here's a brief explanation to get you started:
The vertex shader is responsible for drawing the individual vertices. aPosition is a variable that will be bound to the FloatBuffer that contains the coordinates of the vertices. Similarly, aTexPosition is a variable that will be bound to the FloatBuffer that contains the coordinates of the texture. gl_Position is a built-in OpenGL variable and represents the position of each vertex. vTexPosition is a varying variable, whose value is simply passed on to the fragment shader.
In this tutorial, the fragment shader is responsible for coloring the square. It picks up colors from the texture using the texture2D method and assigns them to the fragment using a built-in variable named gl_FragColor.
The shader code needs to be represented as String objects in the class.
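Represented as String fields, the two shaders described above might look like this sketch (the field names vertexShaderCode and fragmentShaderCode match the ones used in initializeProgram; the variable names aPosition, aTexPosition, vTexPosition, and uTexture come from the explanation above):

```java
private final String vertexShaderCode =
    "attribute vec4 aPosition;" +
    "attribute vec2 aTexPosition;" +
    "varying vec2 vTexPosition;" +
    "void main() {" +
    "  gl_Position = aPosition;" +
    "  vTexPosition = aTexPosition;" +
    "}";

private final String fragmentShaderCode =
    "precision mediump float;" +
    "uniform sampler2D uTexture;" +
    "varying vec2 vTexPosition;" +
    "void main() {" +
    "  gl_FragColor = texture2D(uTexture, vTexPosition);" +
    "}";
```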
Step 4: Create a Program
Create a new method called initializeProgram to create an OpenGL program after compiling and linking the shaders.
Use glCreateShader to create a shader object and return a reference to it in the form of an int. To create a vertex shader, pass the value GL_VERTEX_SHADER to it. Similarly, to create a fragment shader, pass the value GL_FRAGMENT_SHADER to it. Next use glShaderSource to associate the appropriate shader code with the shader. Use glCompileShader to compile the shader code.
After compiling both shaders, create a new program using glCreateProgram. Just like glCreateShader, this too returns an int as a reference to the program. Call glAttachShader to attach the shaders to the program. Finally, call glLinkProgram to link the program.
Your method and the associated variables should look like this:
private int vertexShader;
private int fragmentShader;
private int program;

private void initializeProgram(){
    vertexShader = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER);
    GLES20.glShaderSource(vertexShader, vertexShaderCode);
    GLES20.glCompileShader(vertexShader);

    fragmentShader = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);
    GLES20.glShaderSource(fragmentShader, fragmentShaderCode);
    GLES20.glCompileShader(fragmentShader);

    program = GLES20.glCreateProgram();
    GLES20.glAttachShader(program, vertexShader);
    GLES20.glAttachShader(program, fragmentShader);
    GLES20.glLinkProgram(program);
}
You might have noticed that the OpenGL methods (the methods prefixed with gl) belong to the class GLES20. This is because we are using OpenGL ES 2.0. If you wish to use a higher version, then you will have to use the classes GLES30 or GLES31.
Step 5: Draw the Square
Create a new method called draw to actually draw the square using the vertices and shaders we defined earlier.
Here's what you need to do in this method:
Use glBindFramebuffer to bind a frame buffer object; passing 0 as the frame buffer name binds the default, on-screen frame buffer.
Use glUseProgram to start using the program we just linked.
Pass the value GL_BLEND to glDisable to disable the blending of colors while rendering.
Use glGetAttribLocation to get a handle to the variables aPosition and aTexPosition mentioned in the vertex shader code.
Use glGetUniformLocation to get a handle to the constant uTexture mentioned in the fragment shader code.
Use glVertexAttribPointer to associate the aPosition and aTexPosition handles with the verticesBuffer and the textureBuffer respectively.
Use glBindTexture to bind the texture (passed as an argument to the draw method) to the fragment shader.
Clear the contents of the GLSurfaceView using glClear.
Finally, use the glDrawArrays method to actually draw the two triangles (and thus the square).
The code for the draw method should look like this:
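A sketch assembled from the steps above (the handle variable names are mine; the glEnableVertexAttribArray calls are needed to activate the attribute arrays, even though the list above doesn't mention them explicitly):

```java
public void draw(int texture) {
    // Bind the default (on-screen) frame buffer.
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
    GLES20.glUseProgram(program);
    GLES20.glDisable(GLES20.GL_BLEND);

    // Handles to the shader variables.
    int positionHandle = GLES20.glGetAttribLocation(program, "aPosition");
    int texturePositionHandle = GLES20.glGetAttribLocation(program, "aTexPosition");
    int textureHandle = GLES20.glGetUniformLocation(program, "uTexture");

    // Associate the buffers with the attributes.
    GLES20.glVertexAttribPointer(texturePositionHandle, 2, GLES20.GL_FLOAT, false, 0, textureBuffer);
    GLES20.glEnableVertexAttribArray(texturePositionHandle);
    GLES20.glVertexAttribPointer(positionHandle, 2, GLES20.GL_FLOAT, false, 0, verticesBuffer);
    GLES20.glEnableVertexAttribArray(positionHandle);

    // Bind the texture passed to the method.
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture);
    GLES20.glUniform1i(textureHandle, 0);

    // Clear and draw the two triangles as a strip.
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
}
```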
Add a constructor to the class to initialize the buffers and the program at the time of object creation.
public Square(){
    initializeBuffers();
    initializeProgram();
}
3. Rendering the OpenGL Plane and Texture
Currently, our renderer does nothing. We need to change that so that it can render the plane we created in the previous steps.
But first, let us create a Bitmap. Add any photo to your project's res/drawable folder. The file I am using is called forest.jpg. Use the BitmapFactory to convert the photo into a Bitmap object. Also, store the dimensions of the Bitmap object in separate variables.
Change the constructor of the EffectsRenderer class so that it has the following contents:
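A sketch of the updated constructor, assuming the photo is saved as res/drawable/forest.jpg and that photoWidth and photoHeight are int fields:

```java
private Context context;
private Bitmap photo;
private int photoWidth, photoHeight;

public EffectsRenderer(Context context){
    super();
    this.context = context;
    // Decode the drawable resource into a Bitmap and remember its size.
    photo = BitmapFactory.decodeResource(context.getResources(), R.drawable.forest);
    photoWidth = photo.getWidth();
    photoHeight = photo.getHeight();
}
```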
Create a new method called generateSquare to convert the bitmap into a texture and initialize a Square object. You will also need an array of integers to hold references to the OpenGL textures. Use glGenTextures to initialize the array and glBindTexture to activate the texture at index 0.
Next, use glTexParameteri to set various properties that decide how the texture is rendered:
Set GL_TEXTURE_MIN_FILTER (the minifying function) and the GL_TEXTURE_MAG_FILTER (the magnifying function) to GL_LINEAR to make sure that the texture looks smooth, even when it's stretched or shrunk.
Set GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T to GL_CLAMP_TO_EDGE so that the texture is never repeated.
Finally, use the texImage2D method to map the Bitmap to the texture. The implementation of the generateSquare method should look like this:
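Following those steps, generateSquare might be implemented like this (the textures array holds two texture names: index 0 for the photo and index 1 for the effect output used later in the tutorial):

```java
private int textures[] = new int[2];
private Square square;

private void generateSquare(){
    // Create two textures and make the first one active.
    GLES20.glGenTextures(2, textures, 0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);

    // Smooth scaling, no repetition.
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);

    // Map the Bitmap to the active texture.
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, photo, 0);
    square = new Square();
}
```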
Whenever the dimensions of the GLSurfaceView change, the onSurfaceChanged method of the Renderer is called. Here's where you have to call glViewport to specify the new dimensions of the viewport. Also, call glClearColor to paint the GLSurfaceView black. Next, call generateSquare to reinitialize the textures and the plane.
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
    GLES20.glViewport(0, 0, width, height);
    GLES20.glClearColor(0, 0, 0, 1);
    generateSquare();
}
Finally, call the Square object's draw method inside the onDrawFrame method of the Renderer.
@Override
public void onDrawFrame(GL10 gl) {
    square.draw(textures[0]);
}
You can now run your app and see the photo you had chosen being rendered as an OpenGL texture on a plane.
4. Using the Media Effects Framework
The complex code we wrote until now was just a prerequisite to use the Media Effects framework. It's now time to start using the framework itself. Add the following fields to your Renderer class.
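The field names below are my assumptions; both classes live in the android.media.effect package:

```java
private EffectContext effectContext;
private Effect effect;
```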
Initialize the effectContext field by using the EffectContext.createWithCurrentGlContext. It's responsible for managing the information about the visual effects inside an OpenGL context. To optimize performance, this should be called only once. Add the following code at the beginning of your onDrawFrame method.
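Guarding the call with a null check ensures createWithCurrentGlContext runs only once, even though onDrawFrame is called for every frame. A sketch:

```java
if (effectContext == null) {
    // Must be called from the GL thread, with a current GL context.
    effectContext = EffectContext.createWithCurrentGlContext();
}
```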
Creating an effect is very simple. Use the effectContext to create an EffectFactory and use the EffectFactory to create an Effect object. Once an Effect object is available, you can call apply and pass a reference to the original texture to it, in our case it is textures[0], along with a reference to a blank texture object, in our case it is textures[1]. After the apply method is called, textures[1] will contain the result of the Effect.
For example, to create and apply the grayscale effect, here's the code you have to write:
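A sketch of such a method (the method name is mine; photoWidth and photoHeight are the dimensions of the original Bitmap):

```java
private void grayScaleEffect(){
    EffectFactory factory = effectContext.getFactory();
    effect = factory.createEffect(EffectFactory.EFFECT_GRAYSCALE);
    // Read from textures[0], write the result into textures[1].
    effect.apply(textures[0], photoWidth, photoHeight, textures[1]);
}
```

You would then call this method from onDrawFrame, after the effect context is created, and render the result with square.draw(textures[1]).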
Some effects take parameters. For instance, the brightness adjustment effect has a brightness parameter which takes a float value. You can use setParameter to change the value of any parameter. The following code shows you how to use it:
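A sketch of a brightness method along the same lines (the value 2f is an arbitrary example that doubles the brightness):

```java
private void brightnessEffect(){
    EffectFactory factory = effectContext.getFactory();
    effect = factory.createEffect(EffectFactory.EFFECT_BRIGHTNESS);
    // The "brightness" parameter takes a float value.
    effect.setParameter("brightness", 2f);
    effect.apply(textures[0], photoWidth, photoHeight, textures[1]);
}
```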
The effect will make your app render the following result:
Conclusion
In this tutorial, you have learned how to use the Media Effects Framework to apply various effects to your photos. While doing so, you also learned how to draw a plane using OpenGL ES 2.0 and apply various textures to it.
The framework can be applied to both photos and videos. In case of videos, you simply have to apply the effect to the individual frames of the video in the onDrawFrame method.
You have already seen three effects in this tutorial and the framework has dozens more for you to experiment with. To know more about them, refer to the Android Developer's website.
In the first part of this series, we explored the basics of the Sprite Kit framework and implemented the game's start screen. In this tutorial, we will implement the game's main classes.
Swift is an object-oriented language, and we will take advantage of this by separating all of the game's entities into their own classes. We'll start by implementing the Invader class.
1. Implementing the Invader Class
Step 1: Create the Invader Class
Select New > File... from Xcode's File menu, choose Cocoa Touch Class from the iOS > Source section, and click Next. Name the class Invader and make sure it inherits from SKSpriteNode. Make sure that Language is set to Swift. Enter the following code into Invader.swift.
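A sketch of the class, reconstructed from the description below (the image name "invader1" is my assumption):

```swift
import UIKit
import SpriteKit

class Invader: SKSpriteNode {

    var invaderRow = 0
    var invaderColumn = 0

    init() {
        let texture = SKTexture(imageNamed: "invader1")
        super.init(texture: texture, color: SKColor.clearColor(), size: texture.size())
        self.name = "invader"
    }

    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
    }

    func fireBullet(scene: SKScene) {
    }
}
```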
The Invader class is a subclass of the SKSpriteNode class. It has two properties, invaderRow and invaderColumn. The invaders are aligned in a grid just like in the original Space Invaders game. The two properties give us an easy way to keep track of which row and column the invader is in.
In the init method, we initialize an SKTexture instance. The init(imageNamed:) method takes an image as a parameter. We then invoke the initializer of the superclass, passing in the texture, SKColor.clearColor for the color parameter, and the texture's size for the size parameter. Finally, we set the name to "invader" so we can identify it later.
The init method is a designated initializer, which means that we need to delegate initialization up to a designated initializer of the Invader's superclass. That's why we invoke the init(texture:color:size:) method.
You may be wondering why the required init(coder:) method is there as well. The SKSpriteNode conforms to the NSCoding protocol. The init(coder:) method is marked as required, which means that every subclass needs to override this method.
We will implement the fireBullet method later in this tutorial.
Step 2: Add Invaders to the Scene
In this step, we will add the invaders to the GameScene. Open GameScene.swift and delete everything inside the didMoveToView(_:) method as well as the body of the for-in loop in the touchesBegan(_:withEvent:) method. The contents of GameScene.swift should now look like this.
import SpriteKit

class GameScene: SKScene {

    override func didMoveToView(view: SKView) {
    }

    override func touchesBegan(touches: NSSet, withEvent event: UIEvent) {
        /* Called when a touch begins */
        for touch: AnyObject in touches {
        }
    }

    override func update(currentTime: CFTimeInterval) {
        /* Called before each frame is rendered */
    }
}
We will have one global variable in our project, invaderNum. This variable is used to keep track of the current level of the game. By declaring it as a global variable, we have access to invaderNum across scenes. To declare the variable as a global variable, we declare it outside the GameScene class.
import SpriteKit
var invaderNum = 1
class GameScene: SKScene {
...
Next, add the following properties to the GameScene class.
class GameScene: SKScene {
let rowsOfInvaders = 4
var invaderSpeed = 2
let leftBounds = CGFloat(30)
var rightBounds = CGFloat(0)
var invadersWhoCanFire:[Invader] = []
override func didMoveToView(view: SKView) {
}
The rowsOfInvaders property is how many rows of invaders the game will have and the invaderSpeed property is how fast the invaders will move. The leftBounds and rightBounds properties are used to create a margin on the left and right side of the screen, restricting the invaders' movement in the left and right directions. And finally, the invadersWhoCanFire property is an array that's used to keep track of which invaders can fire a bullet.
Add the setupInvaders method below the update(currentTime:) method in the GameScene class.
func setupInvaders(){
    var invaderRow = 0
    var invaderColumn = 0
    let numberOfInvaders = invaderNum * 2 + 1

    for var i = 1; i <= rowsOfInvaders; i++ {
        invaderRow = i
        for var j = 1; j <= numberOfInvaders; j++ {
            invaderColumn = j
            let tempInvader:Invader = Invader()
            let invaderHalfWidth:CGFloat = tempInvader.size.width/2
            let xPositionStart:CGFloat = size.width/2 - invaderHalfWidth - (CGFloat(invaderNum) * tempInvader.size.width) + CGFloat(10)
            tempInvader.position = CGPoint(x:xPositionStart + ((tempInvader.size.width+CGFloat(10))*(CGFloat(j-1))), y:CGFloat(self.size.height - CGFloat(i) * 46))
            tempInvader.invaderRow = invaderRow
            tempInvader.invaderColumn = invaderColumn
            addChild(tempInvader)
            if(i == rowsOfInvaders){
                invadersWhoCanFire.append(tempInvader)
            }
        }
    }
}
We have the invaderRow and invaderColumn variables that will be used to set the properties of the same name on the invader. Next, we use a double for loop to lay out the invaders on the screen. There is a lot of numeric type conversion going on, because Swift does not implicitly convert numbers to the appropriate type; we must do so ourselves.
We first instantiate a new Invader, tempInvader, and then declare a constant invaderHalfWidth that is half the size of tempInvader.
Next, we calculate the xPositionStart so that the invaders will always be aligned in the middle of the scene. We get half of the scene's width and subtract half of the invader's width since the default registration point is the center (0.5, 0.5) of the sprite. We then must subtract the width of the invader times however much invaderNum is equal to, and add 10 to that value, since there are 10 points of space between the invaders. This may be a little hard to comprehend at first, so take your time to understand it.
We then set the invader's position property, which is a CGPoint. We use a bit more math to make sure each invader has 10 points of space between them and that each row has 46 points of space between them.
We assign the invaderRow and invaderColumn properties, and add the tempInvader to the scene with the addChild(_:) method. If this is the last row of invaders, we put the tempInvader into the invadersWhoCanFire array.
The setupInvaders method is invoked in the didMoveToView(_:) method. In this method, we also set the backgroundColor property to SKColor.blackColor.
If you test the application, you should see 4 rows of 3 invaders. If you set invaderNum to 2, you should see 4 rows of 5 invaders aligned in the middle of the scene.
2. Implementing the Player Class
Step 1: Create the Player Class
Create a new Cocoa Touch Class named Player that is a subclass of SKSpriteNode. Add the following implementation to Player.swift.
import UIKit
import SpriteKit

class Player: SKSpriteNode {

    override init() {
        let texture = SKTexture(imageNamed: "player1")
        super.init(texture: texture, color: SKColor.clearColor(), size: texture.size())
        animate()
    }

    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
    }

    private func animate(){
        var playerTextures:[SKTexture] = []
        for i in 1...2 {
            playerTextures.append(SKTexture(imageNamed: "player\(i)"))
        }
        let playerAnimation = SKAction.repeatActionForever(SKAction.animateWithTextures(playerTextures, timePerFrame: 0.1))
        self.runAction(playerAnimation)
    }

    func die(){
    }

    func kill(){
    }

    func respawn(){
    }

    func fireBullet(scene: SKScene){
    }
}
The init method should look familiar. The only difference is that we are using a different image for the initial setup. There are two images named player1 and player2 in the images folder, one has the thruster engaged and the other has the thruster off. We will constantly switch between these two images, creating the illusion of a thruster firing on and off. This is what the animate method does.
In the animate method, we have an array playerTextures that will hold the textures for the animation. We add the SKTexture objects to this array by using a for-in loop over a closed range, created with the closed range operator. We use string interpolation to get the correct image and initialize an SKTexture instance.
We declare a constant, playerAnimation, which invokes the repeatActionForever method of the SKAction class. In that action, we invoke animateWithTextures(_:timePerFrame:). The animateWithTextures(_:timePerFrame:) method takes as parameters an array of textures and the amount of time that each texture is shown. Lastly, we invoke runAction(_:) and pass in the playerAnimation.
The other methods will be implemented later in this tutorial.
Step 2: Adding the Player to the Scene
Declare a constant property named player to the GameScene class.
class GameScene: SKScene {
...
var invadersWhoCanFire:[Invader] = [Invader]()
let player:Player = Player()
Next, add the setupPlayer method below the setupInvaders method.
You should be familiar with the implementation of the setupPlayer method. We set the player's position and add it to the scene. However, we are using a new function, CGRectGetMidX(_:), which returns the center of a rectangle along the x-axis. Here we use the scene's frame.
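A sketch of setupPlayer under those assumptions (the y-coordinate is my choice to keep the ship near the bottom of the screen):

```swift
func setupPlayer(){
    // Center the player horizontally, near the bottom of the scene.
    player.position = CGPoint(x: CGRectGetMidX(self.frame), y: player.size.height / 2 + 10)
    addChild(player)
}
```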
You can now invoke the setupPlayer method in the didMoveToView(_:) method.
3. Implementing the Bullet Class
Step 1: Create the Bullet Class
Create a new Cocoa Touch Class named Bullet that is a subclass of SKSpriteNode.
The init method takes two parameters, imageName and bulletSound. The second parameter is optional. The player will play a laser sound each time a bullet is fired. I do not have the invaders doing that in this game, although you certainly could. That's also the reason why the bullet sound is an optional parameter. You could even use a different sound for each one.
The first part should be familiar, although we are now creating the texture with whatever image was passed in as the first argument. This will allow you to use different images for the player and invaders' bullets if you wanted to.
If bulletSound isn't nil, we run an SKAction method playSoundFileNamed(_:waitForCompletion:). This method takes as parameters a String, which is the name of the sound file including the extension, and a Bool, waitForCompletion. The waitForCompletion parameter isn't important to us. If it were set to true, then the action would last for however long the sound file is.
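Putting the pieces described above together, the Bullet class might look like this sketch:

```swift
import UIKit
import SpriteKit

class Bullet: SKSpriteNode {

    init(imageName: String, bulletSound: String?) {
        let texture = SKTexture(imageNamed: imageName)
        super.init(texture: texture, color: SKColor.clearColor(), size: texture.size())
        // Only play a sound if one was provided.
        if let sound = bulletSound {
            runAction(SKAction.playSoundFileNamed(sound, waitForCompletion: false))
        }
    }

    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
    }
}
```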
Step 2: Create the InvaderBullet Class
Create a new Cocoa Touch Class named InvaderBullet that is a subclass of the Bullet class.
The implementation of the InvaderBullet class might not make much sense at first, because all we do is invoke the superclass's init(imageName:bulletSound:) method in its initializer. Why it is set up this way will become clear when we add the code for collision detection.
Step 3: Create the PlayerBullet Class
Create a new Cocoa Touch Class named PlayerBullet that is also a subclass of the Bullet class. As you can see, the implementation of the PlayerBullet class is identical to that of the InvaderBullet class.
Conclusion
In this tutorial, we created and implemented some of the key classes of the game. We added a grid of invaders to the scene and the spaceship the player will be controlling. We will continue to work with these classes in the next part of this series in which we implement the gameplay.
In this tutorial, we’re going to explore various ways to integrate our application with the features offered by the Windows Phone platform. We’ll explore launches and choosers, learn how to interact with contacts and appointments, and see how to take advantage of Kid’s Corner, an innovative feature introduced to allow kids to safely use the phone.
Launchers and Choosers
When we discussed storage earlier in this series, we introduced the concept of isolated applications. In the same way that storage is isolated such that you can’t access the data stored by another application, the application itself is isolated from the operating system.
The biggest benefit of this approach is security. Even if a malicious application is able to pass the certification process, it won’t have the chance to do much damage because it doesn’t have direct access to the operating system. But sooner or later, you’ll need to interact with one of the many Windows Phone features, like sending a message, making a phone call, playing a song, etc.
For all these scenarios, the framework has introduced launchers and choosers, which are sets of APIs that demand a specific task from the operating system. Once the task is completed, control is returned to the application.
Launchers are “fire and forget” APIs. You demand the operation and don’t expect anything in return—for example, starting a phone call or playing a video.
Choosers are used to get data from a native application—for example, contacts from the People Hub—and import it into your app.
All the launchers and choosers are available in the Microsoft.Phone.Tasks namespace and share the same behavior:
Every launcher and chooser is represented by a specific class.
If needed, you set some properties that are used to define the launcher or chooser’s settings.
With a chooser, you’ll need to subscribe to the Completed event, which is triggered when the operation is completed.
The Show() method is called to execute the task.
Note: Launchers and choosers can’t be used to override the built-in Windows Phone security mechanism, so you won’t be able to execute operations without explicit permission from the user.
In the following sample, you can see a launcher that sends an email using the EmailComposeTask class:
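A sketch of such a launcher (the recipient, subject, and body strings are placeholders):

```csharp
EmailComposeTask emailTask = new EmailComposeTask();
emailTask.To = "info@example.com";
emailTask.Subject = "Hello";
emailTask.Body = "This message was sent from a Windows Phone app.";
// Hand the task over to the operating system.
emailTask.Show();
```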
Every chooser returns a TaskResult property, with the status of the operation. It’s important to verify that the status is TaskResult.OK before moving on, because the user could have canceled the operation.
The following is a list of all the available launchers:
MapsDirectionTask is used to open the native Map application and calculate a path between two places.
MapsTask is used to open the native Map application centered on a specific location.
MapDownloaderTask is used to manage the offline maps support new to Windows Phone 8. With this task, you’ll be able to open the Settings page used to manage the downloaded maps.
MapUpdaterTask is used to redirect the user to the specific Settings page to check for offline maps updates.
ConnectionSettingsTask is used to quickly access the different Settings pages to manage the different available connections, like Wi-Fi, cellular, or Bluetooth.
EmailComposeTask is used to prepare an email and send it.
MarketplaceDetailTask is used to display the detail page of an application on the Windows Phone Store. If you don't provide the application ID, it will open the detail page of the current application.
MarketplaceHubTask is used to open the Store to a specific category.
MarketplaceReviewTask is used to open the page in the Windows Phone Store where the user can leave a review for the current application.
MarketplaceSearchTask is used to start a search for a specific keyword in the Store.
MediaPlayerLauncher is used to play audio or a video using the internal Windows Phone player. It can play both files embedded in the Visual Studio project and those saved in the local storage.
PhoneCallTask is used to start a phone call.
ShareLinkTask is used to share a link on a social network using the Windows Phone embedded social features.
ShareStatusTask is used to share custom status text on a social network.
ShareMediaTask is used to share one of the pictures from the Photos Hub on a social network.
SmsComposeTask is used to prepare a text message and send it.
WebBrowserTask is used to open a URI in Internet Explorer for Windows Phone.
SaveAppointmentTask is used to save an appointment in the native Calendar app.
The following is a list of available choosers:
AddressChooserTask is used to import a contact’s address.
CameraCaptureTask is used to take a picture with the integrated camera and import it into the application.
EmailAddressChooserTask is used to import a contact’s email address.
PhoneNumberChooserTask is used to import a contact’s phone number.
PhotoChooserTask is used to import a photo from the Photos Hub.
SaveContactTask is used to save a new contact in the People Hub. The chooser simply returns whether the operation completed successfully.
SaveEmailAddressTask is used to add a new email address to an existing or new contact. The chooser simply returns whether the operation completed successfully.
SavePhoneNumberTask is used to add a new phone number to an existing contact. The chooser simply returns whether the operation completed successfully.
SaveRingtoneTask is used to save a new ringtone (which can be part of the project or stored in the local storage). It returns whether the operation completed successfully.
Getting Contacts and Appointments
Launchers already provide a basic way of interacting with the People Hub, but they always require user interaction. They open the People Hub and the user must choose which contact to import.
However, in certain scenarios you need the ability to programmatically retrieve contacts and appointments. Windows Phone 7.5 introduced some new APIs to satisfy this requirement. You just have to keep in mind that, to respect the Windows Phone security constraints, these APIs only work in read-only mode; you'll be able to get data, but not save it (later in this article, we'll see that Windows Phone 8 has introduced a way to override this limitation).
In the following table, you can see which data you can access based on where the contacts are saved.

Provider              | Contact Name | Contact Picture | Other Information | Calendar Appointments
----------------------|--------------|-----------------|-------------------|----------------------
Device                | Yes          | Yes             | Yes               | Yes
Outlook.com           | Yes          | Yes             | Yes               | Yes
Exchange              | Yes          | Yes             | Yes               | Yes
SIM                   | Yes          | Yes             | Yes               | No
Facebook              | Yes          | Yes             | Yes               | No
Other social networks | No           | No              | No                | No
To know where the data is coming from, you can use the Accounts property, which is a collection of the accounts where the information is stored. In fact, you can have information for the same data split across different accounts.
Working With Contacts
Each contact is represented by the Contact class, which contains all the information about a contact, like DisplayName, Addresses, EmailAddresses, Birthdays, etc. (basically, all the information that you can edit when you create a new contact in the People Hub).
Note: To access the contacts, you need to enable the ID_CAP_CONTACTS option in the manifest file.
Interaction with contacts starts with the Contacts class, which can be used to perform a search with the SearchAsync() method. The method requires two parameters: the keyword and the filter to apply. There are two ways to start a search:
A generic search: The keyword is not required since you’ll simply get all the contacts that match the selected filter. This type of search can be achieved with two filter types: FilterKind.PinnedToStart which returns only the contacts that the user has pinned on the Start screen, and FilterKind.None which simply returns all the available contacts.
A search for a specific field: In this case, the search keyword will be applied based on the selected filter. The available filters are DisplayName, EmailAddress, and PhoneNumber.
The SearchAsync() method uses a callback approach; when the search is completed, an event called SearchCompleted is raised.
In the following sample, you can see a search that looks for all contacts whose name is John. The collection of returned contacts is presented to the user with a ListBox control.
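A sketch of that search (the ContactsList control name and the event handler names are my assumptions):

```csharp
private void OnSearchClicked(object sender, RoutedEventArgs e)
{
    Contacts contacts = new Contacts();
    contacts.SearchCompleted += contacts_SearchCompleted;
    // Look for contacts whose display name matches "John".
    contacts.SearchAsync("John", FilterKind.DisplayName, null);
}

private void contacts_SearchCompleted(object sender, ContactsSearchEventArgs e)
{
    // Bind the results to a ListBox named ContactsList.
    ContactsList.ItemsSource = e.Results;
}
```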
Tip: If you want to start a search for another field that is not included in the available filters, you’ll need to get the list of all available contacts by using the FilterKind.None option and apply a filter using a LINQ query. The difference is that built-in filters are optimized for better performance, so make sure to use a LINQ approach only if you need to search for a field other than a name, email address, or phone number.
Working With Appointments
Getting data from the calendar works in a very similar way: each appointment is identified by the Appointment class, which has properties like Subject, Status, Location, StartTime, and EndTime.
To interact with the calendar, you’ll have to use the Appointments class that, like the Contacts class, uses a method called SearchAsync() to start a search and an event called SearchCompleted to return the results.
The only two required parameters to perform a search are the start date and the end date. You’ll get in return all the appointments within this time frame. Optionally, you can also set a maximum number of results to return or limit the search to a specific account.
In the following sample, we retrieve all the appointments that occur between the current date and the day before, and we display them using a ListBox control.
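A sketch of the appointment search could look like this (the ListBox named AppointmentsList is assumed for the example):

```csharp
using Microsoft.Phone.UserData;

private void OnSearchAppointmentsClicked(object sender, RoutedEventArgs e)
{
    Appointments appointments = new Appointments();
    appointments.SearchCompleted += (s, args) =>
    {
        // args.Results contains the Appointment objects in the time frame.
        AppointmentsList.ItemsSource = args.Results;
    };
    // All appointments between yesterday and now.
    appointments.SearchAsync(DateTime.Now.AddDays(-1), DateTime.Now, null);
}
```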
Tip: The only way to filter the results is by start date and end date. If you need to apply additional filters, you’ll have to perform LINQ queries on the results returned by the search operation.
A Private Contact Store for Applications
The biggest limitation of the contacts APIs we’ve seen so far is that we’re only able to read data, not write it. There are some situations in which having the ability to add contacts to the People Hub without asking the user’s permission is a requirement, such as a social network app that wants to add your friends to your contacts list, or a synchronization client that needs to store information from a third-party cloud service in your contact book.
Windows Phone 8 has introduced a new class called ContactStore that represents a private contact book for the application. From the user’s point of view, it behaves like a regular contacts source (like Outlook.com, Facebook, or Gmail). The user will be able to see the contacts in the People Hub, mixed with all the other regular contacts.
From a developer's point of view, the store belongs to the application; you are free to read and write data, but every contact you create will be part of your private contact book, not the phone's contact list. This means that if the app is uninstalled, all the contacts will be lost.
The ContactStore class belongs to the Windows.Phone.PersonalInformation namespace and it offers a method called CreateOrOpenAsync(). The method has to be called every time you need to interact with the private contacts book. If it doesn’t exist, it will be created; otherwise, it will simply be opened.
When you create a ContactStore you can set how the operating system should provide access to it:
The first parameter’s type is ContactStoreSystemAccessMode, and it’s used to choose whether the application will only be able to edit contacts that belong to the private store (ReadOnly), or the user will also be able to edit information using the People Hub (ReadWrite).
The second parameter’s type is ContactStoreApplicationAccessMode, and it’s used to choose whether other third-party applications will be able to access all the information about our contacts (ReadOnly) or only the most important ones, like name and picture (LimitedReadOnly).
The following sample shows the code required to create a new private store:
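A minimal sketch of the store creation, using the two access-mode parameters described above:

```csharp
using Windows.Phone.PersonalInformation;

private async void OnCreateStoreClicked(object sender, RoutedEventArgs e)
{
    // ReadWrite: the user can also edit these contacts from the People Hub.
    // LimitedReadOnly: other apps only see basic data like name and picture.
    ContactStore store = await ContactStore.CreateOrOpenAsync(
        ContactStoreSystemAccessMode.ReadWrite,
        ContactStoreApplicationAccessMode.LimitedReadOnly);
}
```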
Tip: After you’ve created a private store, you can’t change the permissions you’ve defined, so you’ll always have to call the CreateOrOpenAsync() method with the same parameters.
Creating Contacts
A contact is defined by the StoredContact class, which is a bit different from the Contact class we’ve previously seen. In this case, the only properties that are directly exposed are GivenName and FamilyName. All the other properties can be accessed by calling the GetPropertiesAsync() method of the StoredContact class, which returns a collection of type Dictionary<string, object>.
Every item of the collection is identified by a key (the name of the contact’s property) and an object (the value). To help developers access the properties, all the available keys are exposed by a helper class named KnownContactProperties. In the following sample, we use the key KnownContactProperties.Email to store the user’s email address.
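A sketch of creating a contact with an email address (the name and address values are placeholders):

```csharp
using Windows.Phone.PersonalInformation;

private async void OnAddContactClicked(object sender, RoutedEventArgs e)
{
    ContactStore store = await ContactStore.CreateOrOpenAsync();
    StoredContact contact = new StoredContact(store)
    {
        GivenName = "John",
        FamilyName = "Doe"
    };
    // Every field other than the name lives in the properties dictionary.
    var properties = await contact.GetPropertiesAsync();
    properties.Add(KnownContactProperties.Email, "john.doe@example.com");
    await contact.SaveAsync();
}
```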
Tip: Since the ContactStore is a dictionary, two values cannot have the same key. Before adding a new property to the contact, you’ll have to make sure that it doesn’t exist yet; otherwise, you’ll need to update the existing one.
The StoredContact class also supports a way to store custom information by accessing the extended properties using the GetExtendedPropertiesAsync() method. It works like the standard properties, except that the property key is totally custom. These kinds of properties won’t be displayed in the People Hub since Windows Phone doesn’t know how to deal with them, but they can be used by your application.
In the following sample, we add new custom information called MVP Category:
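A sketch of adding the custom property (the category value is a placeholder):

```csharp
using Windows.Phone.PersonalInformation;

private async void OnAddCustomInfoClicked(object sender, RoutedEventArgs e)
{
    ContactStore store = await ContactStore.CreateOrOpenAsync();
    StoredContact contact = new StoredContact(store)
    {
        GivenName = "John",
        FamilyName = "Doe"
    };
    // Extended properties use fully custom keys, invisible in the People Hub.
    var extended = await contact.GetExtendedPropertiesAsync();
    extended.Add("MVP Category", "Windows Phone Development");
    await contact.SaveAsync();
}
```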
Searching contacts in the private contact book is a little tricky because there’s no direct way to search a contact for a specific field.
Searches are performed using the ContactQueryResult class, which is created by calling the CreateContactQuery() method of the ContactStore object. The only available operations are GetContactsAsync(), which returns all the contacts, and GetContactCountAsync(), which returns the number of available contacts.
You can also define in advance which fields you’re going to work with, but you’ll still have to use the GetPropertiesAsync() method to extract the proper values. Let’s see how it works in the following sample, in which we look for a contact whose email address is info@qmatteoq.com:
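A sketch of the query, iterating over the results to find the matching email address:

```csharp
using Windows.Phone.PersonalInformation;

private async void OnSearchContactClicked(object sender, RoutedEventArgs e)
{
    ContactStore store = await ContactStore.CreateOrOpenAsync();
    // Declare up front which fields the query will work with.
    ContactQueryOptions options = new ContactQueryOptions();
    options.DesiredFields.Add(KnownContactProperties.Email);
    ContactQueryResult queryResult = store.CreateContactQuery(options);
    var contacts = await queryResult.GetContactsAsync();
    foreach (StoredContact contact in contacts)
    {
        var properties = await contact.GetPropertiesAsync();
        if (properties.ContainsKey(KnownContactProperties.Email) &&
            properties[KnownContactProperties.Email].ToString() == "info@qmatteoq.com")
        {
            // This is the contact we're looking for.
        }
    }
}
```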
You can define which fields you’re interested in by creating a new ContactQueryOptions object and adding it to the DesiredFields collection. Then, you can pass the ContactQueryOptions object as a parameter when you create the ContactQueryResult one. As you can see, defining the fields isn’t enough to get the desired result. We still have to query each contact using the GetPropertiesAsync() method to see if the information value is the one we’re looking for.
The purpose of the ContactQueryOptions class is to prepare the next query operations so they can be executed faster.
Updating and Deleting Contacts
Updating a contact is achieved in the same way as creating a new one: after you’ve retrieved the contact you want to edit, you have to change the required information and call the SaveAsync() method again, as in the following sample:
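A sketch of the update, reusing the query pattern shown for searching:

```csharp
using Windows.Phone.PersonalInformation;

private async void OnUpdateContactClicked(object sender, RoutedEventArgs e)
{
    ContactStore store = await ContactStore.CreateOrOpenAsync();
    ContactQueryResult queryResult = store.CreateContactQuery();
    var contacts = await queryResult.GetContactsAsync();
    foreach (StoredContact contact in contacts)
    {
        var properties = await contact.GetPropertiesAsync();
        if (properties.ContainsKey(KnownContactProperties.Email) &&
            properties[KnownContactProperties.Email].ToString() == "info@qmatteoq.com")
        {
            // Change the email address and persist the contact again.
            properties[KnownContactProperties.Email] = "mail@domain.com";
            await contact.SaveAsync();
        }
    }
}
```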
After we’ve retrieved the user whose email address is info@qmatteoq.com, we change it to mail@domain.com, and save it.
Deletion works in a similar way, except that you’ll have to deal with the contact’s ID, which is a unique identifier that is automatically assigned by the store (you can’t set it; you can only read it). Once you’ve retrieved the contact you want to delete, you have to call the DeleteContactAsync() method on the ContactStore object, passing as parameter the contact ID, which is stored in the Id property of the StoredContact class.
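A sketch of the deletion, again locating the contact by email address first:

```csharp
using Windows.Phone.PersonalInformation;

private async void OnDeleteContactClicked(object sender, RoutedEventArgs e)
{
    ContactStore store = await ContactStore.CreateOrOpenAsync();
    ContactQueryResult queryResult = store.CreateContactQuery();
    var contacts = await queryResult.GetContactsAsync();
    foreach (StoredContact contact in contacts)
    {
        var properties = await contact.GetPropertiesAsync();
        if (properties.ContainsKey(KnownContactProperties.Email) &&
            properties[KnownContactProperties.Email].ToString() == "info@qmatteoq.com")
        {
            // The Id property is assigned by the store and is read-only.
            await store.DeleteContactAsync(contact.Id);
        }
    }
}
```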
In the previous sample, after we’ve retrieved the contact with the email address info@qmatteoq.com, we delete it using its unique identifier.
Dealing With Remote Synchronization
When working with custom contact sources, we usually don’t simply manage local contacts, but data that is synced with a remote service instead. In this scenario, you have to keep track of the remote identifier of the contact, which will be different from the local one since, as previously mentioned, it’s automatically generated and can’t be set.
For this scenario, the StoredContact class offers a property called RemoteId to store such information. Having a RemoteId also simplifies the search operations we’ve seen before. The ContactStore class, in fact, offers a method called FindContactByRemoteIdAsync(), which is able to retrieve a specific contact based on the remote ID as shown in the following sample:
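A sketch of the lookup ("remote-id-42" is a hypothetical identifier assigned by your service):

```csharp
using Windows.Phone.PersonalInformation;

private async void OnSearchByRemoteIdClicked(object sender, RoutedEventArgs e)
{
    ContactStore store = await ContactStore.CreateOrOpenAsync();
    StoredContact contact = await store.FindContactByRemoteIdAsync("remote-id-42");
    if (contact != null)
    {
        // Work with the synced contact.
    }
}
```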
There’s one important requirement to keep in mind: the RemoteId property’s value should be unique across any application installed on the phone that uses a private contact book; otherwise, you’ll get an exception.
In this article published by Microsoft, you can see an implementation of a class called RemoteIdHelper that offers some methods for adding random information to the remote ID (using a GUID) to make sure it’s unique.
Taking Advantage of Kid's Corner
Kid’s Corner is an interesting and innovative feature introduced in Windows Phone 8 that is especially useful for parents of young children. Basically, it’s a sandbox that we can customize. We can decide which apps, games, pictures, videos, and music can be accessed.
As developers, we are able to know when an app is running in Kid’s Corner mode. This way, we can customize the experience to avoid providing inappropriate content, such as sharing features.
Taking advantage of this feature is easy; we simply check the Modes property of the ApplicationProfile class, which belongs to the Windows.Phone.ApplicationModel namespace. When it is set to Default, the application is running normally. If it’s set to Alternate, it’s running in Kid’s Corner mode.
private void OnCheckStatusClicked(object sender, RoutedEventArgs e)
{
if (ApplicationProfile.Modes == ApplicationProfileModes.Default)
{
MessageBox.Show("The app is running in normal mode.");
}
else
{
MessageBox.Show("The app is running in Kid's Corner mode.");
}
}
Speech APIs: Let's Talk With the Application
Speech APIs are one of the most interesting new features added in Windows Phone 8. From a user point of view, vocal features are managed in the Settings page. The Speech section lets users configure all the main settings like the voice type, but most of all, it’s used to set up the language they want to use for the speech services. Typically, it’s set with the same language of the user interface, and users have the option to change it by downloading and installing a new voice pack. It’s important to understand how speech services are configured, because in your application, you’ll be able to use speech recognition only for languages that have been installed by the user.
The purpose of speech services is to add vocal recognition support in your applications in the following ways:
Enable users to speak commands to interact with the application, such as opening it and executing a task.
Enable text-to-speech features so that the application is able to read text to users.
Enable text recognition so that users can enter text by dictating it instead of typing it.
In this section, we’ll examine the basic requirements for implementing all three modes in your application.
Voice Commands
Voice commands are a way to start your application and execute a specific task regardless of what the user is doing. They are activated by tapping and holding the Start button. Windows Phone offers native support for many voice commands, such as starting a phone call, dictating email, searching via Bing, and more.
The user simply has to speak a command; if it’s successfully recognized, the application will be opened and, as developers, we’ll get some information to understand which command has been issued so that we can redirect the user to the proper page or perform a specific operation.
Voice commands are based on VCD files, which are XML files that are included in your project. Using a special syntax, you’ll be able to define all the commands you want to support in your application and how the application should behave when they are used. These files are natively supported by Visual Studio. If you right-click on your project and choose Add new item, you’ll find a template called VoiceCommandDefinition in the Windows Phone section.
The following code sample is what a VCD file looks like:
<?xml version="1.0" encoding="utf-8"?>
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.0">
  <CommandSet xml:lang="en" Name="NotesCommandSet">
    <CommandPrefix>My notes</CommandPrefix>
    <Example> Open my notes and add a new note </Example>
    <Command Name="AddNote">
      <Example> add a new note </Example>
      <ListenFor> [and] add [a] new note </ListenFor>
      <ListenFor> [and] create [a] new note </ListenFor>
      <Feedback> I’m adding a new note... </Feedback>
      <Navigate Target="/AddNote.xaml" />
    </Command>
  </CommandSet>
</VoiceCommands>
A VCD file can contain one or more CommandSet nodes, which are identified by a Name and a specific language (the xml:lang attribute). The second attribute is the most important one. Your application will support voice commands only for the languages you’ve included in CommandSet in the VCD file (the voice commands’ language is defined by users in the Settings page). You can have multiple CommandSet nodes to support multiple languages.
Each CommandSet can have a CommandPrefix, which is the text that should be spoken by users to start sending commands to our application. If one is not specified, the name of the application will automatically be used. This property is useful if you want to localize the command or if your application’s title is too complex to pronounce. You can also add an Example tag, which contains the text displayed by the Windows Phone dialog to help users understand what kind of commands they can use.
Then, inside a CommandSet, you can add up to 100 commands identified by the Command tag. Each command has the following characteristics:
A unique name, which is set in the Name attribute.
The Example tag shows users sample text for the current command.
ListenFor contains the text that should be spoken to activate the command. Up to ten ListenFor tags can be specified for a single command to cover variations of the text. You can also add optional words inside square brackets. In the previous sample, the AddNote command can be activated by saying either “add a new note” or “and add new note.”
Feedback is the text spoken by Windows Phone to notify users that it has understood the command and is processing it.
The Navigate tag's Target attribute can be used to customize the navigation flow of the application. If we don’t set it, the application will be opened to the main page by default. Otherwise, as in the previous sample, we can redirect the user to a specific page. Of course, in both cases we’ll receive the information about the spoken command; we’ll see how to deal with it later.
Once we’ve completed the VCD definition, we are ready to use it in our application.
Note: To use speech services, you’ll need to enable the ID_CAP_SPEECH_RECOGNITION option in the manifest file.
Commands are embedded in a Windows Phone application by using a class called VoiceCommandService, which belongs to the Windows.Phone.Speech.VoiceCommands namespace. This static class exposes a method called InstallCommandSetsFromFileAsync(), which requires the path of the VCD file we’ve just created.
The file path is expressed using a Uri that should start with the ms-appx:/// prefix. This Uri refers to the Visual Studio project’s structure, starting from the root.
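A sketch of the installation call, assuming the VCD file is named VoiceCommands.xml and sits in the project root:

```csharp
using Windows.Phone.Speech.VoiceCommands;

private async void OnInstallCommandsClicked(object sender, RoutedEventArgs e)
{
    // ms-appx:/// points to the root of the Visual Studio project.
    await VoiceCommandService.InstallCommandSetsFromFileAsync(
        new Uri("ms-appx:///VoiceCommands.xml"));
}
```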
Phrase Lists
A VCD file can also contain a phrase list, as in the following sample:
<?xml version="1.0" encoding="utf-8"?>
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.0">
  <CommandSet xml:lang="en" Name="NotesCommandSet">
    <CommandPrefix>My notes</CommandPrefix>
    <Example> Open my notes and add a new note </Example>
    <Command Name="OpenNote">
      <Example> open the note </Example>
      <ListenFor> open the note {number} </ListenFor>
      <Feedback> I’m opening the note... </Feedback>
      <Navigate />
    </Command>
    <PhraseList Label="number">
      <Item> 1 </Item>
      <Item> 2 </Item>
      <Item> 3 </Item>
    </PhraseList>
  </CommandSet>
</VoiceCommands>
Phrase lists are used to manage parameters that can be added to a phrase using braces. Each PhraseList node is identified by a Label attribute, which is the keyword to include in the braces inside the ListenFor node. In the previous example, users can say the phrase “open the note” followed by any of the numbers specified with the Item tag inside the PhraseList. You can have up to 2,000 items in a single list.
The previous sample is useful for understanding how this feature works, but it’s not very realistic; often the list of parameters is not static, but is dynamically updated during the application execution. Take the previous scenario as an example: in a notes application, the notes list isn’t fixed since users can create an unlimited number of notes.
The APIs offer a way to keep a PhraseList dynamically updated, as demonstrated in the following sample:
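A sketch of the update, using the NotesCommandSet and number names from the VCD sample above:

```csharp
using Windows.Phone.Speech.VoiceCommands;

private async void OnUpdatePhraseListClicked(object sender, RoutedEventArgs e)
{
    // The index is the Name attribute of the CommandSet tag in the VCD file.
    VoiceCommandSet commandSet =
        VoiceCommandService.InstalledCommandSets["NotesCommandSet"];
    // The new list replaces the old one, so pass every item, not just new ones.
    await commandSet.UpdatePhraseListAsync("number",
        new[] { "1", "2", "3", "4", "5" });
}
```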
First, you have to get a reference to the current command set by using the VoiceCommandService.InstalledCommandSets collection. As the index, you have to use the name of the set that you’ve defined in the VCD file (the Name attribute of the CommandSet tag). Once you have a reference to the set, you can call the UpdatePhraseListAsync() to update a list by passing two parameters:
the name of the PhraseList (set using the Label attribute)
the collection of new items, as an array of strings
It’s important to keep in mind that the UpdatePhraseListAsync() method overrides the current items in the PhraseList, so you will have to add all the available items every time, not just the new ones.
Intercepting the Requested Command
The command invoked by the user is sent to your application with the query string mechanism discussed earlier in this series. When an application is opened by a command, the user is redirected to the page specified in the Navigate node of the VCD file. The following is a sample URI:
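An illustrative URI for the AddNote command might look like this (the exact reco text depends on what the user spoke):

```
/AddNote.xaml?voiceCommandName=AddNote&reco=My%20notes%20add%20a%20new%20note
```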
The voiceCommandName parameter contains the spoken command, while the reco parameter contains the full text that has been recognized by Windows Phone.
If the command supports a phrase list, you’ll get another parameter with the same name of the PhraseList and the spoken item as a value. The following code is a sample URI based on the previous note sample, where the user can open a specific note by using the OpenNote command:
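An illustrative URI for the OpenNote command, assuming the user spoke the number 2:

```
/MainPage.xaml?voiceCommandName=OpenNote&reco=My%20notes%20open%20the%20note%202&number=2
```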
Using the APIs we saw earlier in this series, it’s easy to extract the needed information from the query string parameters and use them for our purposes, like in the following sample:
protected override void OnNavigatedTo(NavigationEventArgs e)
{
if (NavigationContext.QueryString.ContainsKey("voiceCommandName"))
{
string commandName = NavigationContext.QueryString["voiceCommandName"];
switch (commandName)
{
case "AddNote":
//Create a new note.
break;
case "OpenNote":
if (NavigationContext.QueryString.ContainsKey("number"))
{
int selectedNote = int.Parse(NavigationContext.QueryString["number"]);
//Load the selected note.
}
break;
}
}
}
We use a switch statement to manage the different supported commands that are available in the NavigationContext.QueryString collection. If the user is trying to open a note, we also get the value of the number parameter.
Working With Speech Recognition
In the beginning of this section, we talked about how to recognize commands that are spoken by the user to open the application. Now it’s time to see how to do the same within the app itself, to recognize commands and allow users to dictate text instead of typing it (a useful feature in providing a hands-free experience).
There are two ways to start speech recognition: by providing a user interface, or by working silently in the background.
In the first case, you can provide users a visual dialog similar to the one used by the operating system when holding the Start button. It’s the perfect solution to manage vocal commands because you’ll be able to give both visual and voice feedback to users.
This is achieved by using the SpeechRecognizerUI class, which offers four key properties to customize the visual dialog:
ListenText is the large, bold text that explains to users what the application is expecting.
ExampleText is additional text that is displayed below the ListenText to help users better understand what kind of speech the application is expecting.
ReadoutEnabled is a Boolean property; when it’s set to true, Windows Phone will read the recognized text to users as confirmation.
ShowConfirmation is another Boolean property; when it’s set to true, users will be able to cancel the operation after the recognition process is completed.
The following sample shows how this feature is used to allow users to dictate a note. We ask users for the text of the note and then, if the operation succeeded, we display the recognized text.
private async void OnStartRecordingClicked(object sender, RoutedEventArgs e)
{
SpeechRecognizerUI sr = new SpeechRecognizerUI();
sr.Settings.ListenText = "Start dictating the note";
sr.Settings.ExampleText = "dictate the note";
sr.Settings.ReadoutEnabled = false;
sr.Settings.ShowConfirmation = true;
SpeechRecognitionUIResult result = await sr.RecognizeWithUIAsync();
if (result.ResultStatus == SpeechRecognitionUIStatus.Succeeded)
{
RecordedText.Text = result.RecognitionResult.Text;
}
}
Notice how the recognition process is started by calling the RecognizeWithUIAsync() method, which returns a SpeechRecognitionUIResult object that contains all the information about the operation.
To silently recognize text, less code is needed since fewer options are available than with the dialog. We just need to start listening for the text and interpret it. We can do this by calling the RecognizeAsync() method of the SpeechRecognizer class. The recognition result is stored in a SpeechRecognitionResult object, the same type returned in the RecognitionResult property by the RecognizeWithUIAsync() method we used previously.
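A sketch of the silent version (RecordedText is an assumed TextBlock in the page's XAML):

```csharp
using Windows.Phone.Speech.Recognition;

private async void OnStartSilentRecordingClicked(object sender, RoutedEventArgs e)
{
    SpeechRecognizer recognizer = new SpeechRecognizer();
    // No dialog is shown; the phone simply starts listening.
    SpeechRecognitionResult result = await recognizer.RecognizeAsync();
    RecordedText.Text = result.Text;
}
```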
Using Custom Grammars
The code we’ve seen so far is used to recognize almost any word in the dictionary. For this reason, speech services will work only if the phone is connected to the Internet since the feature uses online Microsoft services to parse the results.
This approach is useful when users talk to the application to dictate text, as in the previous samples with the note application. But if only a few commands need to be managed, having access to all the words in the dictionary is not required. On the contrary, complete access can cause problems because the application may understand words that aren’t connected to any supported command.
For this scenario, the Speech APIs provide a way to use a custom grammar and limit the number of words that are supported in the recognition process. There are three ways to set a custom grammar:
using only the available standard sets
manually adding the list of supported words
storing the words in an external file
Again, the starting point is the SpeechRecognizer class, which offers a property called Grammars.
To load one of the predefined grammars, use the AddGrammarFromPredefinedType() method, which accepts as parameters a string to identify it (you can choose any value) and the type of grammar to use. There are two sets of grammars: the standard SpeechPredefinedGrammar.Dictation, and SpeechPredefinedGrammar.WebSearch, which is optimized for web related tasks.
In the following sample, we recognize speech using the WebSearch grammar:
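A sketch of recognition with the predefined WebSearch grammar ("webSearch" is just our chosen identification key):

```csharp
using Windows.Phone.Speech.Recognition;

private async void OnStartRecordingWithGrammarClicked(object sender, RoutedEventArgs e)
{
    SpeechRecognizer recognizer = new SpeechRecognizer();
    recognizer.Grammars.AddGrammarFromPredefinedType(
        "webSearch", SpeechPredefinedGrammar.WebSearch);
    SpeechRecognitionResult result = await recognizer.RecognizeAsync();
    RecordedText.Text = result.Text;
}
```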
Even more useful is the ability to allow the recognition process to understand only a few selected words. We can use the AddGrammarFromList() method offered by the Grammars property, which requires the usual identification key followed by a collection of supported words.
In the following sample, we set the SpeechRecognizer class to understand only the words “save” and “cancel”.
private async void OnStartRecordingClicked(object sender, RoutedEventArgs e)
{
SpeechRecognizer recognizer = new SpeechRecognizer();
string[] commands = new[] { "save", "cancel" };
recognizer.Grammars.AddGrammarFromList("customCommands", commands);
SpeechRecognitionResult result = await recognizer.RecognizeAsync();
if (result.Text == "save")
{
//Saving
}
else if (result.Text == "cancel")
{
//Cancelling the operation
}
else
{
MessageBox.Show("Command not recognized");
}
}
If the user says a word that is not included in the custom grammar, the Text property of the SpeechRecognitionResult object will be empty. The biggest benefit of this approach is that it doesn’t require an Internet connection since the grammar is stored locally.
The third and final way to load a grammar is by using another XML definition called Speech Recognition Grammar Specification (SRGS). You can read more about the supported tags in the official documentation by W3C.
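A minimal SRGS file for the note commands described below might look like this sketch (rule names and wording are illustrative):

```xml
<?xml version="1.0" encoding="utf-8"?>
<grammar version="1.0" xml:lang="en-US" root="noteCommands"
         xmlns="http://www.w3.org/2001/06/grammar">
  <rule id="noteCommands">
    <!-- The verb must come first, then the object. -->
    <one-of>
      <item>Open</item>
      <item>Load</item>
    </one-of>
    <one-of>
      <item>the note</item>
      <item>a reminder</item>
    </one-of>
  </rule>
</grammar>
```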
The file describes both the supported words and the correct order that should be used. The previous sample shows the supported commands to manage notes in an application, like “Open the note” or “Load a reminder,” while a command like “Reminder open the” is not recognized.
Visual Studio 2012 offers built-in support for these files with a specific template called SRGS Grammar that is available when you right-click your project and choose Add new item.
Once the file is part of your project, you can load it using the AddGrammarFromUri() method of the SpeechRecognizer class that accepts as a parameter the file path expressed as a Uri, exactly as we’ve seen for VCD files. From now on, the recognition process will use the grammar defined in the file instead of the standard one, as shown in the following sample:
private async void OnStartRecordingWithCustomFile(object sender, RoutedEventArgs e)
{
SpeechRecognizer recognizer = new SpeechRecognizer();
recognizer.Grammars.AddGrammarFromUri("CustomGrammar", new Uri("ms-appx:///CustomGrammar.xml"));
SpeechRecognitionResult result = await recognizer.RecognizeAsync();
if (result.Text != string.Empty)
{
RecordedText.Text = result.Text;
}
else
{
MessageBox.Show("Not recognized");
}
}
Using Text-to-Speech (TTS)
Text-to-speech is a technology that is able to read text to users in a synthesized voice. It can be used to create a dialogue with users so they won’t have to watch the screen to interact with the application.
The basic usage of this feature is really simple. The base class to interact with TTS services is SpeechSynthesizer, which offers a method called SpeakTextAsync(). You simply have to pass to the method the text that you want to read, as shown in the following sample:
private async void OnSpeakClicked(object sender, RoutedEventArgs e)
{
SpeechSynthesizer synth = new SpeechSynthesizer();
await synth.SpeakTextAsync("This is a sample text");
}
Moreover, it’s possible to customize how the text is pronounced by using a standard language called Synthesis Markup Language (SSML), which is based on the XML standard. This standard provides a series of XML tags that defines how a word or part of the text should be pronounced. For example, the speed, language, voice gender, and more can be changed.
The following sample is an example of an SSML file:
<?xml version="1.0"?>
<speak xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:dc="http://purl.org/dc/elements/1.1/"
       xml:lang="en"
       version="1.0">
  <voice age="5">This text is read by a child</voice>
  <break />
  <prosody rate="x-slow"> This text is read very slowly</prosody>
</speak>
This code features three sample SSML tags: voice for simulating the voice’s age, break to add a pause, and prosody to set the reading speed using the rate attribute.
There are two ways to use an SSML definition in your application. The first is to create an external file by adding a new XML file in your project. Next, you can load it by passing the file path to the SpeakSsmlFromUriAsync() method of the SpeechSynthesizer class, similar to how we loaded the VCD file.
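A sketch of the first approach, assuming the SSML file is named SSML.xml and sits in the project root:

```csharp
using Windows.Phone.Speech.Synthesis;

private async void OnSpeakFromFileClicked(object sender, RoutedEventArgs e)
{
    SpeechSynthesizer synth = new SpeechSynthesizer();
    await synth.SpeakSsmlFromUriAsync(new Uri("ms-appx:///SSML.xml"));
}
```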
Another way is to define the text to be read directly in the code by creating a string that contains the SSML tags. In this case, we can use the SpeakSsmlAsync() method which accepts the string to read as a parameter. The following sample shows the same SSML definition we’ve been using, but stored in a string instead of an external file.
private async void OnSpeakClicked(object sender, RoutedEventArgs e)
{
SpeechSynthesizer synth = new SpeechSynthesizer();
StringBuilder textToRead = new StringBuilder();
textToRead.AppendLine("<speak version=\"1.0\"");
textToRead.AppendLine(" xmlns=\"http://www.w3.org/2001/10/synthesis\"");
textToRead.AppendLine(" xml:lang=\"en\">");
textToRead.AppendLine(" <voice age=\"5\">This text is read by a child</voice>");
textToRead.AppendLine("<prosody rate=\"x-slow\"> This text is read very slowly</prosody>");
textToRead.AppendLine("</speak>");
await synth.SpeakSsmlAsync(textToRead.ToString());
}
You can learn more about the SSML definition and available tags in the official documentation provided by W3C.
Data Sharing
Data sharing is a new feature introduced in Windows Phone 8 that can be used to share data between different applications, including third-party ones.
There are two ways to manage data sharing:
File sharing: The application registers an extension such as .log. It will be able to manage any file with the registered extension that is opened by another application (for example, a mail attachment).
Protocol sharing: The application registers a protocol such as log:. Other applications will be able to use it to send plain data like strings or numbers.
In both cases, the user experience is similar:
If no application is available on the device to manage the requested extension or protocol, users will be asked if they want to search the Store for one that can.
If only one application is registered for the requested extension or protocol, it will automatically be opened.
If multiple applications are registered for the same extension or protocol, users will be able to choose which one to use.
Let’s discuss how to support both scenarios in our application.
Note: There are some file types and protocols that are registered by the system, like Office files, pictures, mail protocols, etc. You can’t override them; only Windows Phone is able to manage them. You can see a complete list of the reserved types in the MSDN documentation.
File Sharing
File sharing is supported by adding a new definition in the manifest file that notifies the operating system which extensions the application can manage. As in many other scenarios we’ve previously seen, this modification is not supported by the visual editor, so we’ll need to right-click the manifest file and choose the View code option.
The extension is added in the Extensions section, which should be defined under the Token one:
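A sketch of the manifest entry for the .log extension (the Name value and the logo paths are illustrative; the Logos section is optional):

```xml
<Extensions>
  <FileTypeAssociation Name="LogFile" TaskID="_default"
                       NavUriFragment="fileToken=%s">
    <Logos>
      <Logo Size="small" IsRelative="true">Assets/log-33x33.png</Logo>
      <Logo Size="medium" IsRelative="true">Assets/log-69x69.png</Logo>
      <Logo Size="large" IsRelative="true">Assets/log-176x176.png</Logo>
    </Logos>
    <SupportedFileTypes>
      <FileType ContentType="text/plain">.log</FileType>
    </SupportedFileTypes>
  </FileTypeAssociation>
</Extensions>
```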
Every supported file type has its own FileTypeAssociation tag, which is identified by the Name attribute (which should be unique). Inside this node are two nested sections:
Logos is optional and is used to support an icon to visually identify the file type. Three different images are required, each with a different resolution: 33 × 33, 69 × 69, and 176 × 176. The icons are used in various contexts, such as when the file is received as an email attachment.
SupportedFileTypes is required because it contains the extensions that are going to be supported for the current file type. Multiple extensions can be added.
The previous sample is used to manage the .log file extension in our application.
When another application tries to open a file we support, our application is opened using a special URI:
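The URI looks like the following sketch (the GUID here is a placeholder; a real one is generated by the system):

```
/FileTypeAssociation?fileToken=00000000-0000-0000-0000-000000000000
```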
The fileToken parameter is a GUID that uniquely identifies the file; we’re going to use it later.
To manage the incoming URI, we need to introduce the UriMapper class we talked about earlier in this series. When we identify this special URI, we’re going to redirect the user to a specific page of the application that is able to interact with the file.
The following sample shows what the UriMapper looks like:
public class UriMapper: UriMapperBase
{
public override Uri MapUri(Uri uri)
{
string tempUri = HttpUtility.UrlDecode(uri.ToString());
if (tempUri.Contains("/FileTypeAssociation"))
{
int fileIdIndex = tempUri.IndexOf("fileToken=") + 10;
string fileId = tempUri.Substring(fileIdIndex);
string incomingFileName =
SharedStorageAccessManager.GetSharedFileName(fileId);
string incomingFileType = System.IO.Path.GetExtension(incomingFileName);
switch (incomingFileType)
{
case ".log":
return new Uri("/LogPage.xaml?fileToken=" + fileId, UriKind.Relative);
default:
return new Uri("/MainPage.xaml", UriKind.Relative);
}
}
return uri;
}
}
If the starting Uri contains the FileTypeAssociation keyword, it means that the application has been opened due to a file sharing request. In this case, we need to identify the opened file’s extension. We extract the fileToken parameter and, by using the GetSharedFileName() method of the SharedStorageAccessManager class (which belongs to the Windows.Phone.Storage namespace), we retrieve the original file name.
By reading the name, we’re able to identify the extension and perform the appropriate redirection. In the previous sample, if the extension is .log, we redirect the user to a specific page of the application called LogPage.xaml. It’s important to add to the Uri the fileToken parameter as a query string; we’re going to use it in the page to effectively retrieve the file. Remember to register the UriMapper in the App.xaml.cs file, as explained earlier in this series.
Tip: The previous UriMapper shows a full example that works when the application supports multiple file types. If your application supports just one extension, you don’t need to retrieve the file name and identify the file type. Since the application can be opened with the special URI only in a file sharing scenario, you can immediately redirect the user to the dedicated page.
Now it’s time to interact with the file we received from the other application. We’ll do this in the page that we’ve created for this purpose (in the previous sample code, it’s the one called LogPage.xaml).
We’ve seen that when another application tries to open a .log file, the user is redirected to the LogPage.xaml page with the fileToken parameter added to the query string. We’re going to use the OnNavigatedTo method to manage this scenario:
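A minimal sketch of the OnNavigatedTo override follows; the file name file.log matches the one mentioned in the surrounding text, while the rest of the code is illustrative:

```csharp
protected override async void OnNavigatedTo(NavigationEventArgs e)
{
    if (NavigationContext.QueryString.ContainsKey("fileToken"))
    {
        string fileToken = NavigationContext.QueryString["fileToken"];

        // Copy the shared file to the root of the local storage.
        StorageFile file = await SharedStorageAccessManager.CopySharedFileAsync(
            ApplicationData.Current.LocalFolder,
            "file.log",
            NameCollisionOption.ReplaceExisting,
            fileToken);

        // The file is now in the local storage, ready to be read.
    }
}
```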
Again we use the SharedStorageAccessManager class, this time by invoking the CopySharedFileAsync() method. Its purpose is to copy the file we received to the local storage so that we can work with it.
The required parameters are:
A StorageFolder object, which represents the local storage folder in which to save the file (in the previous sample, we save it in the root).
The name of the file.
The behavior to apply in case a file with the same name already exists (by using one of the values of the NameCollisionOption enumerator).
The GUID that identifies the file, which we get from the fileToken query string parameter.
Once the operation is completed, a new file called file.log will be available in the local storage of the application, and we can start playing with it. For example, we can display its content in the current page.
How to Open a File
So far we’ve seen how to manage an opened file in our application, but we have yet to discuss how to effectively open a file.
The task is easily accomplished by using the LaunchFileAsync() method offered by the Launcher class (which belongs to the Windows.System namespace). It requires a StorageFile object as a parameter, which represents the file you would like to open.
In the following sample, you can see how to open a log file that is included in the Visual Studio project:
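As a sketch, assuming a file called file.log is included in the project with its Build Action set to Content:

```csharp
private async void OnOpenFileClicked(object sender, RoutedEventArgs e)
{
    // Get a reference to the file embedded in the application's package.
    StorageFile file = await Windows.ApplicationModel.Package.Current
        .InstalledLocation.GetFileAsync("file.log");

    // Ask the operating system to open it with the associated application.
    await Windows.System.Launcher.LaunchFileAsync(file);
}
```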
Protocol Sharing
Protocol sharing works similarly to file sharing. We’re going to register a new extension in the manifest file, and we’ll deal with the special URI that is used to launch the application.
Let’s start with the manifest. In this case as well, we’ll have to add a new element in the Extensions section that can be accessed by manually editing the file through the View code option.
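For a hypothetical log protocol, the registration could look like the following sketch:

```xml
<Extensions>
  <Protocol Name="log" NavUriFragment="encodedLaunchUri=%s" TaskID="_default" />
</Extensions>
```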
The most important attribute is Name, which identifies the protocol we’re going to support. The other two attributes are fixed.
An application that supports protocol sharing is opened with the following URI:
/Protocol?encodedLaunchUri=log:ShowLog?LogId=1
The best way to manage it is to use a UriMapper class, as we did for file sharing. The difference is that this time, we’ll look for the encodedLaunchUri parameter. However, the result will be the same: we will redirect the user to the page that is able to manage the incoming information.
public class UriMapper : UriMapperBase
{
    public override Uri MapUri(Uri uri)
    {
        string tempUri = System.Net.HttpUtility.UrlDecode(uri.ToString());
        if (tempUri.Contains("Protocol"))
        {
            int logIdIndex = tempUri.IndexOf("LogId=") + 6;
            string logId = tempUri.Substring(logIdIndex);
            return new Uri("/LogPage.xaml?LogId=" + logId, UriKind.Relative);
        }
        return uri;
    }
}
In this scenario, the operation is simpler. We extract the value of the parameter LogId and pass it to the LogPage.xaml page. Also, we have less work to do in the landing page; we just need to retrieve the parameter’s value in the OnNavigatedTo method, and use it to load the required data, as shown in the following sample:
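A minimal sketch of the landing page's OnNavigatedTo override (the page and parameter names follow the previous UriMapper):

```csharp
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    if (NavigationContext.QueryString.ContainsKey("LogId"))
    {
        string logId = NavigationContext.QueryString["LogId"];
        // Use the identifier to load the requested data.
    }
}
```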
Similar to file sharing, other applications can interact with ours by using the protocol sharing feature and the Launcher class that belongs to the Windows.System namespace.
The difference is that we need to use the LaunchUriAsync() method, as shown in the following sample:
private async void OnOpenUriClicked(object sender, RoutedEventArgs e)
{
    Uri uri = new Uri("log:ShowLog?LogId=1");
    await Windows.System.Launcher.LaunchUriAsync(uri);
}
Conclusion
In this tutorial, we’ve examined various ways to integrate our application with the features offered by the Windows Phone platform:
We started with the simplest integration available: launchers and choosers, which are used to demand an operation from the operating system and eventually get some data in return.
We looked at how to interact with user contacts and appointments: first with a read-only mode offered by a new set of APIs introduced in Windows Phone 7.5, and then with the private contacts book, which is a contacts store that belongs to the application but can be integrated with the native People Hub.
We briefly talked about how to take advantage of Kid’s Corner, an innovative feature introduced to allow kids to safely use the phone without accessing applications that are not suitable for them.
We learned how to use one of the most powerful new APIs added in Windows Phone 8: Speech APIs, to interact with our application using voice commands.
We introduced data sharing, another new feature that allows applications to share data with each other by registering for file extensions and protocols.
This tutorial represents a chapter from Windows Phone 8 Succinctly, a free eBook from the team at Syncfusion.
In this tutorial, we’re going to explore various ways to integrate our application with the features offered by the Windows Phone platform. We’ll explore launchers and choosers, learn how to interact with contacts and appointments, and see how to take advantage of Kid’s Corner, an innovative feature introduced to allow kids to safely use the phone.
Launchers and Choosers
When we discussed storage earlier in this series, we introduced the concept of isolated applications. In the same way that storage is isolated such that you can’t access the data stored by another application, the application itself is isolated from the operating system.
The biggest benefit of this approach is security. Even if a malicious application is able to pass the certification process, it won’t have the chance to do much damage because it doesn’t have direct access to the operating system. But sooner or later, you’ll need to interact with one of the many Windows Phone features, like sending a message, making a phone call, playing a song, etc.
For all these scenarios, the framework has introduced launchers and choosers, which are sets of APIs that demand a specific task from the operating system. Once the task is completed, control is returned to the application.
Launchers are “fire and forget” APIs. You demand the operation and don’t expect anything in return—for example, starting a phone call or playing a video.
Choosers are used to get data from a native application—for example, contacts from the People Hub—and import it into your app.
All the launchers and choosers are available in the Microsoft.Phone.Tasks namespace and share the same behavior:
Every launcher and chooser is represented by a specific class.
If needed, you set some properties that are used to define the launcher or chooser’s settings.
With a chooser, you’ll need to subscribe to the Completed event, which is triggered when the operation is completed.
The Show() method is called to execute the task.
Note: Launchers and choosers can’t be used to override the built-in Windows Phone security mechanism, so you won’t be able to execute operations without explicit permission from the user.
In the following sample, you can see a launcher that sends an email using the EmailComposeTask class:
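A minimal sketch of such a launcher (the recipient, subject, and body values are illustrative):

```csharp
private void OnSendMailClicked(object sender, RoutedEventArgs e)
{
    EmailComposeTask task = new EmailComposeTask
    {
        To = "mail@domain.com",
        Subject = "Hello!",
        Body = "This mail was sent by a Windows Phone application."
    };

    // Launchers are "fire and forget": we just show the task.
    task.Show();
}
```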
Every chooser returns, in its Completed event arguments, a TaskResult property with the status of the operation. It’s important to verify that the status is TaskResult.OK before moving on, because the user could have canceled the operation.
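As a sketch of the chooser pattern, the following uses PhoneNumberChooserTask and checks the operation status before using the result:

```csharp
private void OnChooseNumberClicked(object sender, RoutedEventArgs e)
{
    PhoneNumberChooserTask task = new PhoneNumberChooserTask();
    task.Completed += (s, args) =>
    {
        // The user could have canceled the operation.
        if (args.TaskResult == TaskResult.OK)
        {
            MessageBox.Show(args.PhoneNumber);
        }
    };
    task.Show();
}
```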
The following is a list of all the available launchers:
MapsDirectionTask is used to open the native Map application and calculate a path between two places.
MapsTask is used to open the native Map application centered on a specific location.
MapDownloaderTask is used to manage the offline maps support new to Windows Phone 8. With this task, you’ll be able to open the Settings page used to manage the downloaded maps.
MapUpdaterTask is used to redirect the user to the specific Settings page to check for offline maps updates.
ConnectionSettingsTask is used to quickly access the different Settings pages to manage the different available connections, like Wi-Fi, cellular, or Bluetooth.
EmailComposeTask is used to prepare an email and send it.
MarketplaceDetailTask is used to display the detail page of an application on the Windows Phone Store. If you don’t provide the application ID, it will open the detail page of the current application.
MarketplaceHubTask is used to open the Store to a specific category.
MarketplaceReviewTask is used to open the page in the Windows Phone Store where the user can leave a review for the current application.
MarketplaceSearchTask is used to start a search for a specific keyword in the Store.
MediaPlayerLauncher is used to play audio or a video using the internal Windows Phone player. It can play both files embedded in the Visual Studio project and those saved in the local storage.
PhoneCallTask is used to start a phone call.
ShareLinkTask is used to share a link on a social network using the Windows Phone embedded social features.
ShareStatusTask is used to share custom status text on a social network.
ShareMediaTask is used to share one of the pictures from the Photos Hub on a social network.
SmsComposeTask is used to prepare a text message and send it.
WebBrowserTask is used to open a URI in Internet Explorer for Windows Phone.
SaveAppointmentTask is used to save an appointment in the native Calendar app.
The following is a list of available choosers:
AddressChooserTask is used to import a contact’s address.
CameraCaptureTask is used to take a picture with the integrated camera and import it into the application.
EmailAddressChooserTask is used to import a contact’s email address.
PhoneNumberChooserTask is used to import a contact’s phone number.
PhotoChooserTask is used to import a photo from the Photos Hub.
SaveContactTask is used to save a new contact in the People Hub. The chooser simply returns whether the operation completed successfully.
SaveEmailAddressTask is used to add a new email address to an existing or new contact. The chooser simply returns whether the operation completed successfully.
SavePhoneNumberTask is used to add a new phone number to an existing contact. The chooser simply returns whether the operation completed successfully.
SaveRingtoneTask is used to save a new ringtone (which can be part of the project or stored in the local storage). It returns whether the operation completed successfully.
Getting Contacts and Appointments
Launchers already provide a basic way of interacting with the People Hub, but they always require user interaction. They open the People Hub and the user must choose which contact to import.
However, in certain scenarios you need the ability to programmatically retrieve contacts and appointments. Windows Phone 7.5 introduced some new APIs to satisfy this requirement. You just have to keep in mind that, to respect the Windows Phone security constraints, these APIs only work in read-only mode; you’ll be able to get data, but not save it (later in this article, we’ll see that Windows Phone 8 has introduced a way to override this limitation).
In the following table, you can see which data you can access based on where the contacts are saved.
Provider               | Contact Name | Contact Picture | Other Information | Calendar Appointments
Device                 | Yes          | Yes             | Yes               | Yes
Outlook.com            | Yes          | Yes             | Yes               | Yes
Exchange               | Yes          | Yes             | Yes               | Yes
SIM                    | Yes          | Yes             | Yes               | No
Facebook               | Yes          | Yes             | Yes               | No
Other social networks  | No           | No              | No                | No
To know where the data is coming from, you can use the Accounts property, which is a collection of the accounts where the information is stored. In fact, you can have information for the same data split across different accounts.
Working With Contacts
Each contact is represented by the Contact class, which contains all the information about a contact, like DisplayName, Addresses, EmailAddresses, Birthdays, etc. (basically, all the information that you can edit when you create a new contact in the People Hub).
Note: To access the contacts, you need to enable the ID_CAP_CONTACTS option in the manifest file.
Interaction with contacts starts with the Contacts class, which can be used to perform a search by using the SearchAsync() method. The method requires two parameters: the keyword and the filter to apply. There are two ways to start a search:
A generic search: The keyword is not required since you’ll simply get all the contacts that match the selected filter. This type of search can be achieved with two filter types: FilterKind.PinnedToStart which returns only the contacts that the user has pinned on the Start screen, and FilterKind.None which simply returns all the available contacts.
A search for a specific field: In this case, the search keyword will be applied based on the selected filter. The available filters are DisplayName, EmailAddress, and PhoneNumber.
The SearchAsync() method uses a callback approach; when the search is completed, an event called SearchCompleted is raised.
In the following sample, you can see a search that looks for all contacts whose name is John. The collection of returned contacts is presented to the user with a ListBox control.
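A minimal sketch of the search (ContactsList is an assumed ListBox declared in the page's XAML):

```csharp
private void OnSearchClicked(object sender, RoutedEventArgs e)
{
    Contacts contacts = new Contacts();
    contacts.SearchCompleted += (s, args) =>
    {
        // Display the returned contacts in the ListBox.
        ContactsList.ItemsSource = args.Results;
    };

    // Look for all the contacts whose display name is John.
    contacts.SearchAsync("John", FilterKind.DisplayName, null);
}
```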
Tip: If you want to start a search for another field that is not included in the available filters, you’ll need to get the list of all available contacts by using the FilterKind.None option and apply a filter using a LINQ query. The difference is that built-in filters are optimized for better performance, so make sure to use a LINQ approach only if you need to search for a field other than a name, email address, or phone number.
Working With Appointments
Getting data from the calendar works in a very similar way: each appointment is identified by the Appointment class, which has properties like Subject, Status, Location, StartTime, and EndTime.
To interact with the calendar, you’ll have to use the Appointments class that, like the Contacts class, uses a method called SearchAsync() to start a search and an event called SearchCompleted to return the results.
The only two required parameters to perform a search are the start date and the end date. You’ll get in return all the appointments within this time frame. Optionally, you can also set a maximum number of results to return or limit the search to a specific account.
In the following sample, we retrieve all the appointments that occur between the current date and the day before, and we display them using a ListBox control.
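A minimal sketch (AppointmentsList is an assumed ListBox declared in the page's XAML):

```csharp
private void OnSearchClicked(object sender, RoutedEventArgs e)
{
    Appointments appointments = new Appointments();
    appointments.SearchCompleted += (s, args) =>
    {
        AppointmentsList.ItemsSource = args.Results;
    };

    // Retrieve the appointments between the day before and now.
    appointments.SearchAsync(DateTime.Now.AddDays(-1), DateTime.Now, null);
}
```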
Tip: The only way to filter the results is by start date and end date. If you need to apply additional filters, you’ll have to perform LINQ queries on the results returned by the search operation.
A Private Contact Store for Applications
The biggest limitation of the contacts APIs we’ve seen so far is that we’re only able to read data, not write it. There are some situations in which having the ability to add contacts to the People Hub without asking the user’s permission is a requirement, such as a social network app that wants to add your friends to your contacts list, or a synchronization client that needs to store information from a third-party cloud service in your contact book.
Windows Phone 8 has introduced a new class called ContactStore that represents a private contact book for the application. From the user’s point of view, it behaves like a regular contacts source (like Outlook.com, Facebook, or Gmail). The user will be able to see the contacts in the People Hub, mixed with all the other regular contacts.
From a developer point of view, the store belongs to the application; you are free to read and write data, but every contact you create will be part of your private contact book, not the phone’s contact list. This means that if the app is uninstalled, all the contacts will be lost.
The ContactStore class belongs to the Windows.Phone.PersonalInformation namespace and it offers a method called CreateOrOpenAsync(). The method has to be called every time you need to interact with the private contacts book. If it doesn’t exist, it will be created; otherwise, it will simply be opened.
When you create a ContactStore you can set how the operating system should provide access to it:
The first parameter’s type is ContactStoreSystemAccessMode, and it’s used to choose whether the application will only be able to edit contacts that belong to the private store (ReadOnly), or the user will also be able to edit information using the People Hub (ReadWrite).
The second parameter’s type is ContactStoreApplicationAccessMode, and it’s used to choose whether other third-party applications will be able to access all the information about our contacts (ReadOnly) or only the most important ones, like name and picture (LimitedReadOnly).
The following sample shows the code required to create a new private store:
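A minimal sketch, using the ReadWrite and LimitedReadOnly permissions as an example choice (run inside an async method):

```csharp
ContactStore store = await ContactStore.CreateOrOpenAsync(
    ContactStoreSystemAccessMode.ReadWrite,
    ContactStoreApplicationAccessMode.LimitedReadOnly);
```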
Tip: After you’ve created a private store, you can’t change the permissions you’ve defined, so you’ll always have to call the CreateOrOpenAsync() method with the same parameters.
Creating Contacts
A contact is defined by the StoredContact class, which is a bit different from the Contact class we’ve previously seen. In this case, the only properties that are directly exposed are GivenName and FamilyName. All the other properties can be accessed by calling the GetPropertiesAsync() method of the StoredContact class, which returns a collection of type Dictionary<string, object>.
Every item of the collection is identified by a key (the name of the contact’s property) and an object (the value). To help developers access the properties, all the available keys are exposed by a class named KnownContactProperties. In the following sample, we use the key KnownContactProperties.Email to store the user’s email address.
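A minimal sketch of creating a contact and setting its email address (the name values are illustrative; run inside an async method):

```csharp
ContactStore store = await ContactStore.CreateOrOpenAsync();
StoredContact contact = new StoredContact(store)
{
    GivenName = "John",
    FamilyName = "Doe"
};

// Set the email address through the properties dictionary.
IDictionary<string, object> properties = await contact.GetPropertiesAsync();
properties[KnownContactProperties.Email] = "info@qmatteoq.com";

await contact.SaveAsync();
```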
Tip: Since the ContactStore is a dictionary, two values cannot have the same key. Before adding a new property to the contact, you’ll have to make sure that it doesn’t exist yet; otherwise, you’ll need to update the existing one.
The StoredContact class also supports a way to store custom information by accessing the extended properties using the GetExtendedPropertiesAsync() method. It works like the standard properties, except that the property key is totally custom. These kinds of properties won’t be displayed in the People Hub since Windows Phone doesn’t know how to deal with them, but they can be used by your application.
In the following sample, we add new custom information called MVP Category:
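A minimal sketch, assuming an existing StoredContact instance called contact (the property value is illustrative):

```csharp
// Extended properties use fully custom keys.
IDictionary<string, object> extended = await contact.GetExtendedPropertiesAsync();
extended["MVP Category"] = "Windows Phone Development";

await contact.SaveAsync();
```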
Searching contacts in the private contact book is a little tricky because there’s no direct way to search a contact for a specific field.
Searches are performed using the ContactQueryResult class, which is created by calling the CreateContactQuery() method of the ContactStore object. The only available operations are GetContactsAsync(), which returns all the contacts, and GetContactCountAsync(), which returns the number of available contacts.
You can also define in advance which fields you’re going to work with, but you’ll still have to use the GetPropertiesAsync() method to extract the proper values. Let’s see how it works in the following sample, in which we look for a contact whose email address is info@qmatteoq.com:
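A minimal sketch of the query (run inside an async method):

```csharp
ContactStore store = await ContactStore.CreateOrOpenAsync();

// Declare in advance the fields we're going to work with.
ContactQueryOptions options = new ContactQueryOptions();
options.DesiredFields.Add(KnownContactProperties.Email);

ContactQueryResult queryResult = store.CreateContactQuery(options);
IReadOnlyList<StoredContact> contacts = await queryResult.GetContactsAsync();

foreach (StoredContact contact in contacts)
{
    IDictionary<string, object> properties = await contact.GetPropertiesAsync();
    if (properties.ContainsKey(KnownContactProperties.Email) &&
        properties[KnownContactProperties.Email].ToString() == "info@qmatteoq.com")
    {
        // This is the contact we were looking for.
    }
}
```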
You can define which fields you’re interested in by creating a new ContactQueryOptions object and adding it to the DesiredFields collection. Then, you can pass the ContactQueryOptions object as a parameter when you create the ContactQueryResult one. As you can see, defining the fields isn’t enough to get the desired result. We still have to query each contact using the GetPropertiesAsync() method to see if the information value is the one we’re looking for.
The purpose of the ContactQueryOptions class is to prepare the next query operations so they can be executed faster.
Updating and Deleting Contacts
Updating a contact is achieved in the same way as creating a new one: after you’ve retrieved the contact you want to edit, you have to change the required information and call the SaveAsync() method again, as in the following sample:
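A minimal sketch of the update (run inside an async method):

```csharp
ContactStore store = await ContactStore.CreateOrOpenAsync();
ContactQueryResult queryResult = store.CreateContactQuery();
IReadOnlyList<StoredContact> contacts = await queryResult.GetContactsAsync();

foreach (StoredContact contact in contacts)
{
    IDictionary<string, object> properties = await contact.GetPropertiesAsync();
    if (properties.ContainsKey(KnownContactProperties.Email) &&
        properties[KnownContactProperties.Email].ToString() == "info@qmatteoq.com")
    {
        // Change the email address and persist the contact.
        properties[KnownContactProperties.Email] = "mail@domain.com";
        await contact.SaveAsync();
        break;
    }
}
```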
After we’ve retrieved the user whose email address is info@qmatteoq.com, we change it to mail@domain.com, and save it.
Deletion works in a similar way, except that you’ll have to deal with the contact’s ID, which is a unique identifier that is automatically assigned by the store (you can’t set it; you can only read it). Once you’ve retrieved the contact you want to delete, you have to call the DeleteContactAsync() method on the ContactStore object, passing as parameter the contact ID, which is stored in the Id property of the StoredContact class.
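A minimal sketch of the deletion (run inside an async method):

```csharp
ContactStore store = await ContactStore.CreateOrOpenAsync();
ContactQueryResult queryResult = store.CreateContactQuery();
IReadOnlyList<StoredContact> contacts = await queryResult.GetContactsAsync();

foreach (StoredContact contact in contacts)
{
    IDictionary<string, object> properties = await contact.GetPropertiesAsync();
    if (properties.ContainsKey(KnownContactProperties.Email) &&
        properties[KnownContactProperties.Email].ToString() == "info@qmatteoq.com")
    {
        // Delete the contact through its automatically assigned identifier.
        await store.DeleteContactAsync(contact.Id);
        break;
    }
}
```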
In the previous sample, after we’ve retrieved the contact with the email address info@qmatteoq.com, we delete it using its unique identifier.
Dealing With Remote Synchronization
When working with custom contact sources, we usually don’t simply manage local contacts, but data that is synced with a remote service instead. In this scenario, you have to keep track of the remote identifier of the contact, which will be different from the local one since, as previously mentioned, it’s automatically generated and can’t be set.
For this scenario, the StoredContact class offers a property called RemoteId to store such information. Having a RemoteId also simplifies the search operations we’ve seen before. The ContactStore class, in fact, offers a method called FindContactByRemoteIdAsync(), which is able to retrieve a specific contact based on the remote ID as shown in the following sample:
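A minimal sketch (the remote identifier value is illustrative; run inside an async method):

```csharp
ContactStore store = await ContactStore.CreateOrOpenAsync();
StoredContact contact = await store.FindContactByRemoteIdAsync("remote-id-42");
if (contact != null)
{
    // The contact synced with the remote service was found.
}
```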
There’s one important requirement to keep in mind: the RemoteId property’s value should be unique across any application installed on the phone that uses a private contact book; otherwise, you’ll get an exception.
In this article published by Microsoft, you can see an implementation of a class called RemoteIdHelper that offers some methods for adding random information to the remote ID (using a GUID) to make sure it’s unique.
Taking Advantage of Kid's Corner
Kid’s Corner is an interesting and innovative feature introduced in Windows Phone 8 that is especially useful for parents of young children. Basically, it’s a sandbox that we can customize. We can decide which apps, games, pictures, videos, and music can be accessed.
As developers, we are able to know when an app is running in Kid’s Corner mode. This way, we can customize the experience to avoid providing inappropriate content, such as sharing features.
Taking advantage of this feature is easy; we simply check the Modes property of the ApplicationProfile class, which belongs to the Windows.Phone.ApplicationModel namespace. When it is set to Default, the application is running normally. If it’s set to Alternate, it’s running in Kid’s Corner mode.
private void OnCheckStatusClicked(object sender, RoutedEventArgs e)
{
    if (ApplicationProfile.Modes == ApplicationProfileModes.Default)
    {
        MessageBox.Show("The app is running in normal mode.");
    }
    else
    {
        MessageBox.Show("The app is running in Kid's Corner mode.");
    }
}
Speech APIs: Let's Talk With the Application
Speech APIs are one of the most interesting new features added in Windows Phone 8. From a user point of view, vocal features are managed in the Settings page. The Speech section lets users configure all the main settings like the voice type, but most of all, it’s used to set up the language they want to use for the speech services. Typically, it’s set to the same language as the user interface, and users have the option to change it by downloading and installing a new voice pack. It’s important to understand how speech services are configured, because in your application, you’ll be able to use speech recognition only for languages that have been installed by the user.
The purpose of speech services is to add vocal recognition support in your applications in the following ways:
Enable users to speak commands to interact with the application, such as opening it and executing a task.
Enable text-to-speech features so that the application is able to read text to users.
Enable text recognition so that users can enter text by dictating it instead of typing it.
In this section, we’ll examine the basic requirements for implementing all three modes in your application.
Voice Commands
Voice commands are a way to start your application and execute a specific task regardless of what the user is doing. They are activated by tapping and holding the Start button. Windows Phone offers native support for many voice commands, such as starting a phone call, dictating email, searching via Bing, and more.
The user simply has to speak a command; if it’s successfully recognized, the application will be opened and, as developers, we’ll get some information to understand which command has been issued so that we can redirect the user to the proper page or perform a specific operation.
Voice commands are based on VCD files, which are XML files that are included in your project. Using a special syntax, you’ll be able to define all the commands you want to support in your application and how the application should behave when they are used. These files are natively supported by Visual Studio. If you right-click on your project and choose Add new item, you’ll find a template called VoiceCommandDefinition in the Windows Phone section.
The following code sample is what a VCD file looks like:
<?xml version="1.0" encoding="utf-8"?>
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.0">
  <CommandSet xml:lang="en" Name="NotesCommandSet">
    <CommandPrefix>My notes</CommandPrefix>
    <Example> Open my notes and add a new note </Example>
    <Command Name="AddNote">
      <Example> add a new note </Example>
      <ListenFor> [and] add [a] new note </ListenFor>
      <ListenFor> [and] create [a] new note </ListenFor>
      <Feedback> I'm adding a new note... </Feedback>
      <Navigate Target="/AddNote.xaml" />
    </Command>
  </CommandSet>
</VoiceCommands>
A VCD file can contain one or more CommandSet nodes, which are identified by a Name and a specific language (the xml:lang attribute). The second attribute is the most important one. Your application will support voice commands only for the languages you’ve included in CommandSet in the VCD file (the voice commands’ language is defined by users in the Settings page). You can have multiple CommandSet nodes to support multiple languages.
Each CommandSet can have a CommandPrefix, which is the text that should be spoken by users to start sending commands to our application. If one is not specified, the name of the application will automatically be used. This property is useful if you want to localize the command or if your application’s title is too complex to pronounce. You can also add an Example tag, which contains the text displayed by the Windows Phone dialog to help users understand what kind of commands they can use.
Then, inside a CommandSet, you can add up to 100 commands identified by the Command tag. Each command has the following characteristics:
A unique name, which is set in the Name attribute.
The Example tag shows users sample text for the current command.
ListenFor contains the text that should be spoken to activate the command. Up to ten ListenFor tags can be specified for a single command to cover variations of the text. You can also add optional words inside square brackets. In the previous sample, the AddNote command can be activated by pronouncing either “add a new note” or “and add new note.”
Feedback is the text spoken by Windows Phone to notify users that it has understood the command and is processing it.
The Target attribute of the Navigate tag can be used to customize the navigation flow of the application. If we don’t set it, the application will be opened to the main page by default. Otherwise, as in the previous sample, we can redirect the user to a specific page. Of course, in both cases we’ll receive information about the spoken command; we’ll see how to deal with it later.
Once we’ve completed the VCD definition, we are ready to use it in our application.
Note: To use speech services, you’ll need to enable the ID_CAP_SPEECH_RECOGNITION option in the manifest file.
Commands are embedded in a Windows Phone application by using a class called VoiceCommandService, which belongs to the Windows.Phone.Speech.VoiceCommands namespace. This static class exposes a method called InstallCommandSetsFromFileAsync(), which requires the path of the VCD file we’ve just created.
The file path is expressed using a Uri that should start with the ms-appx:/// prefix. This Uri refers to the Visual Studio project’s structure, starting from the root.
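A minimal sketch, assuming the VCD file is called VoiceCommands.xml and is stored in the project's root (run inside an async method):

```csharp
Uri uri = new Uri("ms-appx:///VoiceCommands.xml", UriKind.Absolute);
await VoiceCommandService.InstallCommandSetsFromFileAsync(uri);
```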
Phrase Lists
A VCD file can also contain a phrase list, as in the following sample:
<?xml version="1.0" encoding="utf-8"?>
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.0">
  <CommandSet xml:lang="en" Name="NotesCommandSet">
    <CommandPrefix>My notes</CommandPrefix>
    <Example> Open my notes and add a new note </Example>
    <Command Name="OpenNote">
      <Example> open the note </Example>
      <ListenFor> open the note {number} </ListenFor>
      <Feedback> I'm opening the note... </Feedback>
      <Navigate />
    </Command>
    <PhraseList Label="number">
      <Item> 1 </Item>
      <Item> 2 </Item>
      <Item> 3 </Item>
    </PhraseList>
  </CommandSet>
</VoiceCommands>
Phrase lists are used to manage parameters that can be added to a phrase using braces. Each PhraseList node is identified by a Label attribute, which is the keyword to include in the braces inside the ListenFor node. In the previous example, users can say the phrase “open the note” followed by any of the numbers specified with the Item tag inside the PhraseList. You can have up to 2,000 items in a single list.
The previous sample is useful for understanding how this feature works, but it’s not very realistic; often the list of parameters is not static, but is dynamically updated during the application execution. Take the previous scenario as an example: in a notes application, the notes list isn’t fixed since users can create an unlimited number of notes.
The APIs offer a way to keep a PhraseList dynamically updated, as demonstrated in the following sample:
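A minimal sketch, based on the NotesCommandSet and number names used in the previous VCD sample (run inside an async method):

```csharp
// Get a reference to the installed command set.
VoiceCommandSet commandSet =
    VoiceCommandService.InstalledCommandSets["NotesCommandSet"];

// Override the phrase list with the full, updated set of items.
await commandSet.UpdatePhraseListAsync("number",
    new string[] { "1", "2", "3", "4" });
```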
First, you have to get a reference to the current command set by using the VoiceCommandService.InstalledCommandSets collection. As the index, you have to use the name of the set that you’ve defined in the VCD file (the Name attribute of the CommandSet tag). Once you have a reference to the set, you can call the UpdatePhraseListAsync() to update a list by passing two parameters:
the name of the PhraseList (set using the Label attribute)
the collection of new items, as an array of strings
It’s important to keep in mind that the UpdatePhraseListAsync() method overrides the current items in the PhraseList, so you will have to add all the available items every time, not just the new ones.
Intercepting the Requested Command
The command invoked by the user is sent to your application with the query string mechanism discussed earlier in this series. When an application is opened by a command, the user is redirected to the page specified in the Navigate node of the VCD file. The following is a sample URI:
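For illustration, based on the earlier AddNote command, such a URI could look like this (the exact spoken text and encoding are indicative):

```
/AddNote.xaml?voiceCommandName=AddNote&reco=My%20notes%20add%20a%20new%20note
```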
The voiceCommandName parameter contains the spoken command, while the reco parameter contains the full text that has been recognized by Windows Phone.
If the command supports a phrase list, you’ll get another parameter with the same name of the PhraseList and the spoken item as a value. The following code is a sample URI based on the previous note sample, where the user can open a specific note by using the OpenNote command:
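For illustration, it could look like this (note 2 is an invented value):

```
/MainPage.xaml?voiceCommandName=OpenNote&number=2&reco=My%20notes%20open%20the%20note%202
```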
Using the APIs we saw earlier in this series, it’s easy to extract the needed information from the query string parameters and use them for our purposes, like in the following sample:
protected override void OnNavigatedTo(NavigationEventArgs e)
{
if (NavigationContext.QueryString.ContainsKey("voiceCommandName"))
{
string commandName = NavigationContext.QueryString["voiceCommandName"];
switch (commandName)
{
case "AddNote":
//Create a new note.
break;
case "OpenNote":
if (NavigationContext.QueryString.ContainsKey("number"))
{
int selectedNote = int.Parse(NavigationContext.QueryString["number"]);
//Load the selected note.
}
break;
}
}
}
We use a switch statement to manage the different supported commands that are available in the NavigationContext.QueryString collection. If the user is trying to open a note, we also get the value of the number parameter.
Working With Speech Recognition
In the beginning of this section, we talked about how to recognize commands that are spoken by the user to open the application. Now it’s time to see how to do the same within the app itself, to recognize commands and allow users to dictate text instead of typing it (a useful feature in providing a hands-free experience).
There are two ways to start speech recognition: by providing a user interface, or by working silently in the background.
In the first case, you can provide users a visual dialog similar to the one used by the operating system when holding the Start button. It’s the perfect solution to manage vocal commands because you’ll be able to give both visual and voice feedback to users.
This is achieved by using the SpeechRecognizerUI class, which offers four key properties to customize the visual dialog:
ListenText is the large, bold text that explains to users what the application is expecting.
ExampleText is additional text that is displayed below the ListenText to help users better understand what kind of speech the application is expecting.
ReadoutEnabled is a Boolean property; when it’s set to true, Windows Phone will read the recognized text to users as confirmation.
ShowConfirmation is another Boolean property; when it’s set to true, users will be able to cancel the operation after the recognition process is completed.
The following sample shows how this feature is used to allow users to dictate a note. We ask users for the text of the note and then, if the operation succeeded, we display the recognized text.
private async void OnStartRecordingClicked(object sender, RoutedEventArgs e)
{
SpeechRecognizerUI sr = new SpeechRecognizerUI();
sr.Settings.ListenText = "Start dictating the note";
sr.Settings.ExampleText = "dictate the note";
sr.Settings.ReadoutEnabled = false;
sr.Settings.ShowConfirmation = true;
SpeechRecognitionUIResult result = await sr.RecognizeWithUIAsync();
if (result.ResultStatus == SpeechRecognitionUIStatus.Succeeded)
{
RecordedText.Text = result.RecognitionResult.Text;
}
}
Notice how the recognition process is started by calling the RecognizeWithUIAsync() method, which returns a SpeechRecognitionUIResult object that contains all the information about the operation.
To silently recognize text, less code is needed since there are fewer options than for the dialog. We just need to start listening and interpret the text, which we do by calling the RecognizeAsync() method of the SpeechRecognizer class. The recognition result is stored in a SpeechRecognitionResult object, the same type returned in the RecognitionResult property by the RecognizeWithUIAsync() method we used previously.
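A minimal background recognition handler might look like this (the RecordedText control is assumed from the earlier sample):

```csharp
private async void OnStartSilentRecordingClicked(object sender, RoutedEventArgs e)
{
    SpeechRecognizer recognizer = new SpeechRecognizer();

    // No dialog is shown; the phone simply starts listening.
    SpeechRecognitionResult result = await recognizer.RecognizeAsync();
    RecordedText.Text = result.Text;
}
```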
Using Custom Grammars
The code we’ve seen so far is used to recognize almost any word in the dictionary. For this reason, speech services will work only if the phone is connected to the Internet since the feature uses online Microsoft services to parse the results.
This approach is useful when users talk to the application to dictate text, as in the previous samples with the note application. But if only a few commands need to be managed, having access to all the words in the dictionary is not required. On the contrary, complete access can cause problems because the application may understand words that aren’t connected to any supported command.
For this scenario, the Speech APIs provide a way to use a custom grammar and limit the number of words that are supported in the recognition process. There are three ways to set a custom grammar:
using only the available standard sets
manually adding the list of supported words
storing the words in an external file
Again, the starting point is the SpeechRecognizer class, which offers a property called Grammars.
To load one of the predefined grammars, use the AddGrammarFromPredefinedType() method, which accepts as parameters a string to identify it (you can choose any value) and the type of grammar to use. There are two sets of grammars: the standard SpeechPredefinedGrammar.Dictation, and SpeechPredefinedGrammar.WebSearch, which is optimized for web related tasks.
In the following sample, we recognize speech using the WebSearch grammar:
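A sketch of this, assuming the same RecordedText control as before (the grammar key, "webSearch", is an arbitrary identifier):

```csharp
private async void OnStartWebSearchRecordingClicked(object sender, RoutedEventArgs e)
{
    SpeechRecognizer recognizer = new SpeechRecognizer();

    // Load the predefined grammar optimized for web related tasks.
    recognizer.Grammars.AddGrammarFromPredefinedType(
        "webSearch", SpeechPredefinedGrammar.WebSearch);

    SpeechRecognitionResult result = await recognizer.RecognizeAsync();
    RecordedText.Text = result.Text;
}
```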
Even more useful is the ability to allow the recognition process to understand only a few selected words. We can use the AddGrammarFromList() method offered by the Grammars property, which requires the usual identification key followed by a collection of supported words.
In the following sample, we set the SpeechRecognizer class to understand only the words “save” and “cancel”.
private async void OnStartRecordingClicked(object sender, RoutedEventArgs e)
{
SpeechRecognizer recognizer = new SpeechRecognizer();
string[] commands = new[] { "save", "cancel" };
recognizer.Grammars.AddGrammarFromList("customCommands", commands);
SpeechRecognitionResult result = await recognizer.RecognizeAsync();
if (result.Text == "save")
{
//Saving
}
else if (result.Text == "cancel")
{
//Cancelling the operation
}
else
{
MessageBox.Show("Command not recognized");
}
}
If the user says a word that is not included in the custom grammar, the Text property of the SpeechRecognitionResult object will be empty. The biggest benefit of this approach is that it doesn’t require an Internet connection since the grammar is stored locally.
The third and final way to load a grammar is by using another XML definition called Speech Recognition Grammar Specification (SRGS). You can read more about the supported tags in the official documentation by W3C.
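A minimal SRGS grammar for the note commands described below might look like the following sketch (rule name and structure are illustrative):

```xml
<?xml version="1.0" encoding="utf-8"?>
<grammar version="1.0" xml:lang="en-US" root="Commands"
         xmlns="http://www.w3.org/2001/06/grammar">
  <rule id="Commands">
    <!-- A verb followed by an object, in this order only. -->
    <one-of>
      <item>open</item>
      <item>load</item>
    </one-of>
    <one-of>
      <item>the note</item>
      <item>a reminder</item>
    </one-of>
  </rule>
</grammar>
```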
The file describes both the supported words and the correct order that should be used. The previous sample shows the supported commands to manage notes in an application, like “Open the note” or “Load a reminder,” while a command like “Reminder open the” is not recognized.
Visual Studio 2012 offers built-in support for these files with a specific template called SRGS Grammar that is available when you right-click your project and choose Add new item.
Once the file is part of your project, you can load it using the AddGrammarFromUri() method of the SpeechRecognizer class that accepts as a parameter the file path expressed as a Uri, exactly as we’ve seen for VCD files. From now on, the recognition process will use the grammar defined in the file instead of the standard one, as shown in the following sample:
private async void OnStartRecordingWithCustomFile(object sender, RoutedEventArgs e)
{
SpeechRecognizer recognizer = new SpeechRecognizer();
recognizer.Grammars.AddGrammarFromUri("CustomGrammar", new Uri("ms-appx:///CustomGrammar.xml"));
SpeechRecognitionResult result = await recognizer.RecognizeAsync();
if (result.Text != string.Empty)
{
RecordedText.Text = result.Text;
}
else
{
MessageBox.Show("Not recognized");
}
}
Using Text-to-Speech (TTS)
Text-to-speech is a technology that is able to read text to users in a synthesized voice. It can be used to create a dialogue with users so they won’t have to watch the screen to interact with the application.
The basic usage of this feature is really simple. The base class to interact with TTS services is SpeechSynthesizer, which offers a method called SpeakTextAsync(). You simply have to pass to the method the text that you want to read, as shown in the following sample:
private async void OnSpeakClicked(object sender, RoutedEventArgs e)
{
SpeechSynthesizer synth = new SpeechSynthesizer();
await synth.SpeakTextAsync("This is a sample text");
}
Moreover, it’s possible to customize how the text is pronounced by using a standard language called Synthesis Markup Language (SSML), which is based on the XML standard. This standard provides a series of XML tags that defines how a word or part of the text should be pronounced. For example, the speed, language, voice gender, and more can be changed.
The following sample is an example of an SSML file:
<?xml version="1.0"?>
<speak xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:dc="http://purl.org/dc/elements/1.1/"
       xml:lang="en"
       version="1.0">
  <voice age="5">This text is read by a child</voice>
  <break />
  <prosody rate="x-slow"> This text is read very slowly</prosody>
</speak>
This code features three sample SSML tags: voice for simulating the voice’s age, break to add a pause, and prosody to set the reading speed using the rate attribute.
There are two ways to use an SSML definition in your application. The first is to create an external file by adding a new XML file in your project. Next, you can load it by passing the file path to the SpeakSsmlFromUriAsync() method of the SpeechSynthesizer class, similar to how we loaded the VCD file.
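Assuming the SSML definition is saved in the project as SSML.xml, loading it could look like this sketch (the file name is illustrative):

```csharp
private async void OnSpeakFromFileClicked(object sender, RoutedEventArgs e)
{
    SpeechSynthesizer synth = new SpeechSynthesizer();

    // Read the SSML definition stored in the project file.
    await synth.SpeakSsmlFromUriAsync(new Uri("ms-appx:///SSML.xml"));
}
```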
Another way is to define the text to be read directly in the code by creating a string that contains the SSML tags. In this case, we can use the SpeakSsmlAsync() method which accepts the string to read as a parameter. The following sample shows the same SSML definition we’ve been using, but stored in a string instead of an external file.
private async void OnSpeakClicked(object sender, RoutedEventArgs e)
{
SpeechSynthesizer synth = new SpeechSynthesizer();
StringBuilder textToRead = new StringBuilder();
textToRead.AppendLine("<speak version=\"1.0\"");
textToRead.AppendLine(" xmlns=\"http://www.w3.org/2001/10/synthesis\"");
textToRead.AppendLine(" xml:lang=\"en\">");
textToRead.AppendLine(" <voice age=\"5\">This text is read by a child</voice>");
textToRead.AppendLine("<prosody rate=\"x-slow\"> This text is read very slowly</prosody>");
textToRead.AppendLine("</speak>");
await synth.SpeakSsmlAsync(textToRead.ToString());
}
You can learn more about the SSML definition and available tags in the official documentation provided by W3C.
Data Sharing
Data sharing is a new feature introduced in Windows Phone 8 that can be used to share data between different applications, including third-party ones.
There are two ways to manage data sharing:
File sharing: The application registers an extension such as .log. It will be able to manage any file with the registered extension that is opened by another application (for example, a mail attachment).
Protocol sharing: The application registers a protocol such as log:. Other applications will be able to use it to send plain data like strings or numbers.
In both cases, the user experience is similar:
If no application is available on the device to manage the requested extension or protocol, users will be asked if they want to search the Store for one that can.
If only one application is registered for the requested extension or protocol, it will automatically be opened.
If multiple applications are registered for the same extension or protocol, users will be able to choose which one to use.
Let’s discuss how to support both scenarios in our application.
Note: There are some file types and protocols that are registered by the system, like Office files, pictures, mail protocols, etc. You can’t override them; only Windows Phone is able to manage them. You can see a complete list of the reserved types in the MSDN documentation.
File Sharing
File sharing is supported by adding a new definition in the manifest file that notifies the operating system which extensions the application can manage. As in many other scenarios we’ve previously seen, this modification is not supported by the visual editor, so we’ll need to right-click the manifest file and choose the View code option.
The extension is added in the Extensions section, which should be defined under the Token one:
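For the .log extension used in this chapter, the registration might look like the following sketch. The Name value and logo paths are illustrative; the TaskID and NavUriFragment values are fixed by the platform.

```xml
<Extensions>
  <FileTypeAssociation Name="LogFile" TaskID="_default"
                       NavUriFragment="fileToken=%s">
    <Logos>
      <Logo Size="small" IsRelative="true">Assets/log-33x33.png</Logo>
      <Logo Size="medium" IsRelative="true">Assets/log-69x69.png</Logo>
      <Logo Size="large" IsRelative="true">Assets/log-176x176.png</Logo>
    </Logos>
    <SupportedFileTypes>
      <FileType ContentType="text/plain">.log</FileType>
    </SupportedFileTypes>
  </FileTypeAssociation>
</Extensions>
```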
Every supported file type has its own FileTypeAssociation tag, which is identified by the Name attribute (which should be unique). Inside this node are two nested sections:
Logos is optional and is used to supply icons that visually identify the file type. Three different images are required, each with a different resolution: 33 × 33, 69 × 69, and 176 × 176 pixels. The icons are used in various contexts, such as when the file is received as an email attachment.
SupportedFileTypes is required because it contains the extensions that are going to be supported for the current file type. Multiple extensions can be added.
The previous sample is used to manage the .log file extension in our application.
When another application tries to open a file we support, our application is opened using a special URI:
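The URI has the following shape (the GUID shown is just an example value):

```
/FileTypeAssociation?fileToken=89819279-4fe0-4531-9f57-d633f0949a19
```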
The fileToken parameter is a GUID that uniquely identifies the file; we're going to use it later.
To manage the incoming URI, we need to introduce the UriMapper class we talked about earlier in this series. When we identify this special URI, we’re going to redirect the user to a specific page of the application that is able to interact with the file.
The following sample shows what the UriMapper looks like:
public class UriMapper: UriMapperBase
{
public override Uri MapUri(Uri uri)
{
string tempUri = HttpUtility.UrlDecode(uri.ToString());
if (tempUri.Contains("/FileTypeAssociation"))
{
int fileIdIndex = tempUri.IndexOf("fileToken=") + 10;
string fileId = tempUri.Substring(fileIdIndex);
string incomingFileName =
SharedStorageAccessManager.GetSharedFileName(fileId);
string incomingFileType = System.IO.Path.GetExtension(incomingFileName);
switch (incomingFileType)
{
case ".log":
return new Uri("/LogPage.xaml?fileToken=" + fileId, UriKind.Relative);
default:
return new Uri("/MainPage.xaml", UriKind.Relative);
}
}
return uri;
}
}
If the starting Uri contains the FileTypeAssociation keyword, it means that the application has been opened due to a file sharing request. In this case, we need to identify the opened file's extension. We extract the fileToken parameter and, by using the GetSharedFileName() method of the SharedStorageAccessManager class (which belongs to the Windows.Phone.Storage namespace), we retrieve the original file name.
By reading the name, we’re able to identify the extension and perform the appropriate redirection. In the previous sample, if the extension is .log, we redirect the user to a specific page of the application called LogPage.xaml. It’s important to add to the Uri the fileToken parameter as a query string; we’re going to use it in the page to effectively retrieve the file. Remember to register the UriMapper in the App.xaml.cs file, as explained earlier in this series.
Tip: The previous UriMapper shows a full example that works when the application supports multiple file types. If your application supports just one extension, you don't need to retrieve the file name and identify the file type. Since the application can be opened with the special URI only in a file sharing scenario, you can immediately redirect the user to the dedicated page.
Now it’s time to interact with the file we received from the other application. We’ll do this in the page that we’ve created for this purpose (in the previous sample code, it’s the one called LogPage.xaml).
We’ve seen that when another application tries to open a .log file, the user is redirected to the LogPage.xaml page with the fileToken parameter added to the query string. We’re going to use the OnNavigatedTo event to manage this scenario:
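A sketch of that handler, assuming the file is copied to the root of the local storage as file.log, as described next:

```csharp
protected override async void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);

    if (NavigationContext.QueryString.ContainsKey("fileToken"))
    {
        string fileToken = NavigationContext.QueryString["fileToken"];

        // Copy the shared file into the root of the local storage.
        StorageFile file = await SharedStorageAccessManager.CopySharedFileAsync(
            ApplicationData.Current.LocalFolder,
            "file.log",
            NameCollisionOption.ReplaceExisting,
            fileToken);

        // The file is now in the local storage, ready to be used.
    }
}
```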
Again we use the SharedStorageAccessManager class, this time by invoking the CopySharedFileAsync() method. Its purpose is to copy the file we received to the local storage so that we can work with it.
The required parameters are:
A StorageFolder object, which represents the local storage folder in which to save the file (in the previous sample, we save it in the root).
The name of the file.
The behavior to apply in case a file with the same name already exists (by using one of the values of the NameCollisionOption enumerator).
The GUID that identifies the file, which we get from the fileToken query string parameter.
Once the operation is completed, a new file called file.log will be available in the local storage of the application, and we can start playing with it. For example, we can display its content in the current page.
How to Open a File
So far we’ve seen how to manage an opened file in our application, but we have yet to discuss how to effectively open a file.
The task is easily accomplished by using the LaunchFileAsync() method offered by the Launcher class (which belongs to the Windows.System namespace). It requires a StorageFile object as a parameter, which represents the file you would like to open.
In the following sample, you can see how to open a log file that is included in the Visual Studio project:
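A sketch of the operation, assuming a file named file.log is included in the project with its Build Action set to Content (the file name is illustrative):

```csharp
private async void OnOpenFileClicked(object sender, RoutedEventArgs e)
{
    // Get a reference to the file shipped with the application package.
    StorageFile file = await StorageFile.GetFileFromApplicationUriAsync(
        new Uri("ms-appx:///file.log"));

    // Ask the operating system to open it with the registered application.
    await Windows.System.Launcher.LaunchFileAsync(file);
}
```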
Protocol sharing works similarly to file sharing. We’re going to register a new extension in the manifest file, and we’ll deal with the special URI that is used to launch the application.
Let’s start with the manifest. In this case as well, we’ll have to add a new element in the Extensions section that can be accessed by manually editing the file through the View code option.
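For the log: protocol used in this example, the registration might look like this sketch:

```xml
<Extensions>
  <Protocol Name="log" NavUriFragment="encodedLaunchUri=%s" TaskID="_default" />
</Extensions>
```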
The most important attribute is Name, which identifies the protocol we’re going to support. The other two attributes are fixed.
An application that supports protocol sharing is opened with the following URI:
/Protocol?encodedLaunchUri=log:ShowLog?LogId=1
The best way to manage it is to use a UriMapper class, as we did for file sharing. The difference is that this time, we’ll look for the encodedLaunchUri parameter. However, the result will be the same: we will redirect the user to the page that is able to manage the incoming information.
public class UriMapper : UriMapperBase
{
public override Uri MapUri(Uri uri)
{
string tempUri = System.Net.HttpUtility.UrlDecode(uri.ToString());
if (tempUri.Contains("Protocol"))
{
int logIdIndex = tempUri.IndexOf("LogId=") + 6;
string logId = tempUri.Substring(logIdIndex);
return new Uri("/LogPage.xaml?LogId=" + logId, UriKind.Relative);
}
return uri;
}
}
In this scenario, the operation is simpler. We extract the value of the parameter LogId and pass it to the LogPage.xaml page. Also, we have less work to do in the landing page; we just need to retrieve the parameter’s value using the OnNavigatedTo event, and use it to load the required data, as shown in the following sample:
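A sketch of the landing page's handler, based on the query string techniques used earlier:

```csharp
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);

    if (NavigationContext.QueryString.ContainsKey("LogId"))
    {
        int logId = int.Parse(NavigationContext.QueryString["LogId"]);
        //Load the log with the given id.
    }
}
```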
Similar to file sharing, other applications can interact with ours by using the protocol sharing feature and the Launcher class that belongs to the Windows.System namespace.
The difference is that we need to use the LaunchUriAsync() method, as shown in the following sample:
private async void OnOpenUriClicked(object sender, RoutedEventArgs e)
{
Uri uri = new Uri("log:ShowLog?LogId=1");
await Windows.System.Launcher.LaunchUriAsync(uri);
}
Conclusion
In this tutorial, we’ve examined various ways to integrate our application with the features offered by the Windows Phone platform:
We started with the simplest integration available: launchers and choosers, which are used to demand an operation from the operating system and eventually get some data in return.
We looked at how to interact with user contacts and appointments: first with a read-only mode offered by a new set of APIs introduced in Windows Phone 7.5, and then with the private contacts book, which is a contacts store that belongs to the application but can be integrated with the native People Hub.
We briefly talked about how to take advantage of Kid’s Corner, an innovative feature introduced to allow kids to safely use the phone without accessing applications that are not suitable for them.
We learned how to use one of the most powerful new APIs added in Windows Phone 8: Speech APIs, to interact with our application using voice commands.
We introduced data sharing, another new feature used to share data between different applications by registering file extensions and protocols.
This tutorial represents a chapter from Windows Phone 8 Succinctly, a free eBook from the team at Syncfusion.
In the previous tutorial, we explored the fundamentals of WatchKit development. We created a project in Xcode, added a WatchKit application, and created a basic user interface.
The user interface of our WatchKit application currently displays static data. Unless you live in the desert, that's not very useful for a weather application. In this tutorial, we're going to populate the user interface with data and create a few actions.
1. Updating the User Interface
Step 1: Replacing WKInterfaceDate
Before we populate the user interface with data, we need to make a small change. In the previous tutorial, we added a WKInterfaceDate instance to the bottom group to display the current time and date. It would be more useful, however, to display the time and date of the data we're displaying. The reason for this change will become clear in a few moments.
Open Interface.storyboard, remove the WKInterfaceDate instance in the bottom group and replace it with a WKInterfaceLabel instance. Set the label's Width attribute to Relative to Container and the label's Alignment to right aligned.
Step 2: Adding Outlets
To update the user interface with dynamic data, we need to create a few outlets in the InterfaceController class. Open the storyboard in the main editor and InterfaceController.swift in the Assistant Editor on the right. Select the top label in the first group and Control-Drag from the label to the InterfaceController class to create an outlet. Name the outlet locationLabel.
Repeat these steps for the other labels, naming them temperatureLabel and dateLabel respectively. This is what the InterfaceController class should look like when you're finished.
import WatchKit
import Foundation
class InterfaceController: WKInterfaceController {
@IBOutlet weak var dateLabel: WKInterfaceLabel!
@IBOutlet weak var locationLabel: WKInterfaceLabel!
@IBOutlet weak var temperatureLabel: WKInterfaceLabel!
override func awakeWithContext(context: AnyObject?) {
super.awakeWithContext(context)
}
override func willActivate() {
// This method is called when watch view controller is about to be visible to user
super.willActivate()
}
override func didDeactivate() {
// This method is called when watch view controller is no longer visible
super.didDeactivate()
}
}
Now may be a good time to take a closer look at the implementation of the InterfaceController class. In the previous tutorial, I mentioned that InterfaceController inherits from WKInterfaceController. At first glance, it may seem as if a WKInterfaceController instance behaves like a UIViewController instance, but we also learned in the previous tutorial that there are a number of key differences.
To help us, Xcode has populated the InterfaceController class with three overridden methods. It's important to understand when each method is invoked and what it can or should be used for.
awakeWithContext(_:)
In the awakeWithContext(_:) method, you set up and initialize the interface controller. You may be wondering how it differs from the init method. The awakeWithContext(_:) method is invoked after the interface controller is initialized. The method accepts one parameter, a context object that allows interface controllers to pass information to one another. This is the recommended approach for passing information across scenes, that is, interface controllers.
willActivate
The willActivate method is similar to the viewWillAppear(_:) method of the UIViewController class. The willActivate method is invoked before the user interface of the interface controller is presented to the user. It's ideal for tweaking the user interface before it's presented to the user.
didDeactivate
The didDeactivate method is the counterpart of the willActivate method and is invoked when the scene of the interface controller has been removed. Any cleanup code goes into this method. This method is similar to the viewDidDisappear(_:) method found in the UIViewController class.
With the above in mind, we can start loading data and updating the user interface of our WatchKit application. Let's start with loading weather data.
2. Loading Weather Data
Best Practices
You might be thinking that the next step involves an API call to a weather service, but that's not the case. If we were building an iOS application, you'd be right. However, we're creating a WatchKit application.
It isn't recommended to make complex API calls to fetch data to populate the user interface of a WatchKit application. Even though Apple doesn't explicitly mention this in the documentation, an Apple engineer did mention this unwritten best practice in Apple's developer forums.
The WatchKit application is part of an iOS application and it's the iOS application that's in charge of fetching data from a remote backend. There are several approaches we can take to do this, background fetching being a good choice. In this tutorial, however, we're not going to focus on that aspect.
Instead, we'll add dummy data to the bundle of the WatchKit extension and load it in the awakeWithContext(_:) method we discussed earlier.
Create a blank file by selecting New > File... from the File menu. Choose Empty from the iOS > Other section and name the file weather.json. Double-check that you're adding the file to the RainDrop WatchKit Extension. Don't overlook this small but important detail. Populate the file with the following data.
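The exact data set isn't important; what matters is the structure the extension expects, a locations array whose entries have location, timestamp, and temperature keys. The values below are illustrative (timestamps are seconds since the reference date, temperatures in Celsius):

```json
{
    "locations": [
        { "location": "Cupertino",  "timestamp": 441763200, "temperature": 14.5 },
        { "location": "Brussels",   "timestamp": 441763200, "temperature": 9.0 }
    ]
}
```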
Sharing data between the iOS application and the WatchKit application is an important topic. However, this tutorial focuses on getting your first WatchKit application up and running. In a future tutorial, I will focus on sharing data between an iOS and a WatchKit application.
Even though we won't be covering sharing data in this tutorial, it's important to know that the iOS application and the WatchKit extension don't share a sandbox. Both targets have their own sandbox and that's what makes sharing data less trivial than it seems.
To share data between the iOS and the WatchKit application, you need to leverage app groups. But that's a topic for a future tutorial.
Step 1: Adding SwiftyJSON
Swift is a great language, but some tasks are simpler in Objective-C than they are in Swift. Handling JSON, for example, is one such task. To make this task easier, I've chosen to leverage the popular SwiftyJSON library.
Download the repository from GitHub, unzip the archive, and add SwiftyJSON.swift to the RainDrop WatchKit Extension group. This file is located in the Source folder of the archive. Double-check that SwiftyJSON.swift is added to the RainDrop WatchKit Extension target.
Step 2: Implementing WeatherData
To make it easier to work with the weather data stored in weather.json, we're going to create a structure named WeatherData. Select New > File... from the File menu, choose Swift File from the iOS > Source section, and name the file WeatherData. Make sure the file is added to the RainDrop WatchKit Extension target.
The implementation of the WeatherData structure is short and simple. The structure defines three constant properties, date, location, and temperature.
import Foundation
struct WeatherData {
let date: NSDate
let location: String
let temperature: Double
}
Because the temperature value in weather.json is in Celsius, we also implement a computed property, fahrenheit, for easy conversion from Celsius to Fahrenheit.
var fahrenheit: Double {
return temperature * (9 / 5) + 32
}
We also define two helper methods toCelciusString and toFahrenheitString to make formatting temperature values easier. Don't you love Swift's string interpolation?
Like I said, the implementation of the WeatherData structure is short and simple. This is what the implementation should look like.
import Foundation
struct WeatherData {
let date: NSDate
let location: String
let temperature: Double
var fahrenheit: Double {
return temperature * (9 / 5) + 32
}
func toCelciusString() -> String {
return "\(temperature) °C"
}
func toFahrenheitString() -> String {
return "\(fahrenheit) °F"
}
}
}
Step 3: Loading Data
Before we load the data from weather.json, we need to declare a property for storing the weather data. The property, weatherData, is of type [WeatherData] and will contain the contents of weather.json as instances of the WeatherData structure.
var weatherData: [WeatherData] = []
For ease of use, we also declare a computed property, weather, that gives us access to the first item of the weatherData array. It's the data of this WeatherData instance that we'll display in the interface controller. Can you guess why we need to declare the weather property as an optional?
var weather: WeatherData? {
return weatherData.first
}
We load the data from weather.json in the awakeWithContext(_:) method. To keep the implementation clean, we invoke a helper method named loadWeatherData.
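The call itself is a one-liner in awakeWithContext(_:):

```swift
override func awakeWithContext(context: AnyObject?) {
    super.awakeWithContext(context)

    // Load Weather Data
    loadWeatherData()
}
```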
The implementation of loadWeatherData is probably the most daunting code snippet we'll see in this tutorial. Like I said, parsing JSON isn't trivial in Swift. Luckily, SwiftyJSON does most of the heavy lifting for us.
func loadWeatherData() {
let path = NSBundle.mainBundle().pathForResource("weather", ofType: "json")
if let path = path {
let data = NSData(contentsOfFile: path)
if let data = data {
let weatherData = JSON(data: data)
let locations = weatherData["locations"].array
if let locations = locations {
for location in locations {
let timestamp = location["timestamp"].double!
let date = NSDate(timeIntervalSinceReferenceDate: timestamp)
let model = WeatherData(date: date, location: location["location"].string!, temperature: location["temperature"].double!)
self.weatherData.append(model)
}
}
}
}
}
We obtain the path to weather.json and load its contents as an NSData object. We use SwiftyJSON to parse the JSON, passing in the NSData object. We obtain a reference to the array for the key locations and loop over each location.
We normalize the weather data by converting the timestamp to an NSDate instance and initialize a WeatherData object. Finally, we add the WeatherData object to the weatherData array.
I hope you agree that the implementation isn't all that difficult. Because Swift forces us to make a number of checks, the implementation looks more complex than it actually is.
3. Populating the User Interface
With the weather data ready to use, it's time to update the user interface. As I explained earlier, updating the user interface needs to happen in the willActivate method. Let's take a look at the implementation of this method.
override func willActivate() {
// This method is called when watch view controller is about to be visible to user
super.willActivate()
if let weather = self.weather {
locationLabel.setText(weather.location)
// Update Temperature Label
self.updateTemperatureLabel()
// Update Date Label
self.updateDateLabel()
}
}
After invoking the willActivate method of the superclass, we unwrap the value stored in the weather property. To update the location label, we invoke setText, passing in the value stored in the location property of the weather object. To update the temperature and date labels, we invoke two helper methods. I prefer to keep the willActivate method short and concise, and, more importantly, I don't like to repeat myself.
Before we look at these helper methods, we need to know whether the temperature needs to be displayed in Celsius or Fahrenheit. To resolve this issue, declare a property, celcius, of type Bool and set its initial value to true.
var celcius: Bool = true
The implementation of updateTemperatureLabel is easy to understand. We safely unwrap the value stored in weather and update the temperature label based on the value of celcius. As you can see, the two helper methods of the WeatherData structure we created earlier come in handy.
func updateTemperatureLabel() {
    if let weather = self.weather {
        if self.celcius {
            temperatureLabel.setText(weather.toCelciusString())
        } else {
            temperatureLabel.setText(weather.toFahrenheitString())
        }
    }
}
The implementation of updateDateLabel isn't difficult either. We initialize an NSDateFormatter instance, set its dateFormat property, and convert the date of the weather object by calling stringFromDate(_:) on the dateFormatter object. This value is used to update the date label.
func updateDateLabel() {
    var date: NSDate = NSDate()

    // Initialize Date Formatter
    let dateFormatter = NSDateFormatter()

    // Configure Date Formatter
    dateFormatter.dateFormat = "d/MM HH:mm"

    if let weather = self.weather {
        date = weather.date
    }

    // Update Date Label
    dateLabel.setText(dateFormatter.stringFromDate(date))
}
Build and run the application to see the result. The user interface should now be populated with the data from weather.json.
4. Switching to Fahrenheit
This looks good. But wouldn't it be great if we added support for both Celsius and Fahrenheit? This is easy to do since we've already laid most of the groundwork.
When the user force touches the screen of an interface controller, a menu is shown. Of course, this only works if a menu is available. Let's see how this works.
Open Interface.storyboard and add a menu to the Interface Controller in the Document Outline on the left. By default, a menu has one menu item. We need two menu items so add another menu item to the menu.
Note that the menu and its menu items aren't visible in the user interface. This isn't a problem, since we can't configure the layout of the menu anyway. What we can change is the title of each menu item and its image. You'll better understand what that means when we present the menu.
Select the top menu item, open the Attributes Inspector, set Title to Celsius, and Image to Accept. Select the bottom menu item and set Title to Fahrenheit and Image to Accept.
Next, open InterfaceController.swift in the Assistant Editor on the right. Control-Drag from the top menu item to InterfaceController.swift and create an action named toCelcius. Repeat this step for the bottom menu item, creating an action named toFahrenheit.
The implementation of these actions is short. In toCelcius, we check if the celcius property is set to false, and, if it is, we set the property to true. In toFahrenheit, we check if the celcius property is set to true, and, if it is, we set the property to false.
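The bodies of these actions aren't shown above. A minimal sketch consistent with the description might look like this:

```swift
// Sketch of the two menu item actions. Setting the celcius property
// triggers the didSet observer discussed next.
@IBAction func toCelcius() {
    if !celcius {
        celcius = true
    }
}

@IBAction func toFahrenheit() {
    if celcius {
        celcius = false
    }
}
```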
If the value of celcius changes, we need to update the user interface. What better way to accomplish this than by implementing a property observer on the celcius property? We only need to implement a didSet property observer.
var celcius: Bool = true {
    didSet {
        if celcius != oldValue {
            updateTemperatureLabel()
        }
    }
}
The only detail worth mentioning is that the user interface is only updated if the value of celcius did change. Updating the user interface is as simple as calling updateTemperatureLabel. Build and run the WatchKit application in the iOS Simulator to test the menu.
It's worth mentioning that the iOS Simulator mimics the responsiveness of a physical device. What does that mean? Remember that the WatchKit extension runs on an iPhone while the WatchKit application runs on an Apple Watch. When the user taps a menu item, the touch event is sent over a Bluetooth connection to the iPhone. The WatchKit extension processes the event and sends any updates back to the Apple Watch. This communication is pretty fast, but it isn't as fast as if both extension and application were to run on the same device. That short delay is mimicked by the iOS Simulator to help developers get an idea of performance.
Conclusion
Once you've wrapped your head around the architecture of a WatchKit application, it becomes much easier to understand the possibilities and limitations of the first generation of WatchKit applications. In this tutorial, we've only covered the essentials of WatchKit development. There is much more to discover and explore. Stay tuned.
In this quick tip, you'll learn how to integrate the Butter Knife library in your projects to easily instantiate the views in your layout in your application's code.
Introduction
In every Android application, you have to use the findViewById() method for each view in the layout that you want to use in your application's code. But as application layouts become more complex, these calls become repetitive, and this is where the Butter Knife library comes in.
The Butter Knife library, developed and maintained by Jake Wharton (Square Inc.), provides annotations that help developers instantiate the views of an activity or fragment. It also has annotations to handle events like onClick(), onLongClick(), etc.
In the sample project of this tutorial, you can see a sample application with one activity and one fragment with an implementation using the Butter Knife library and a regular implementation. Let's explore the steps involved to integrate the Butter Knife library.
1. Using the Butter Knife Library
Step 1: Add the Dependency
Add the following dependency to the project's build.gradle file:
compile 'com.jakewharton:butterknife:6.1.0'
Next, synchronize your project with this file by pressing the synchronize button.
Step 2: Use the Annotations
In every activity or fragment, you have to remove, or comment out, every call of the findViewById() method and add the @InjectView annotation before the declaration of the variable, indicating the identifier of the view.
You can now start using the views in your application's code. Butter Knife will handle the instantiation of every single view for you.
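Putting the two steps together, an activity using Butter Knife 6.x might look like the following sketch. The view id and layout name are examples, not taken from the original project.

```java
import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

import butterknife.ButterKnife;
import butterknife.InjectView;

public class MainActivity extends Activity {

    // Butter Knife injects this field; no findViewById() call is needed.
    @InjectView(R.id.sample_textview)
    TextView sampleTextView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Wires up every @InjectView field declared in this activity.
        ButterKnife.inject(this);

        sampleTextView.setText("Injected with Butter Knife");
    }
}
```

In a fragment, you would call ButterKnife.inject(this, view) in onCreateView() instead, passing in the inflated view.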
That is all you have to do to use the Butter Knife library in an activity or fragment. In the next section, I'll show you how to use the Butter Knife library with list views.
2. Using the Butter Knife Library with List Views
The ListView class is a special case to implement, because you instantiate the views inside an adapter. To integrate the Butter Knife library in a list view, you first have to create the custom layout for the items in the list view. I'm going to name mine list_view_item and add the following layout:
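The layout itself isn't included above. A minimal version, using the ids referenced later in the ViewHolder class (the dimensions are assumptions), could look like this:

```xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:orientation="horizontal">

    <ImageView
        android:id="@+id/image_in_item"
        android:layout_width="72dp"
        android:layout_height="72dp" />

    <TextView
        android:id="@+id/textview_in_item"
        android:layout_width="match_parent"
        android:layout_height="wrap_content" />

</LinearLayout>
```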
In this simple layout, we're going to show an image and some text. Next, we need to create the adapter for the list view. Let's name it ListViewAdapter.
public class ListViewAdapter extends BaseAdapter {

    LayoutInflater inflater;

    public ListViewAdapter(LayoutInflater inflater) {
        this.inflater = inflater;
    }

    @Override
    public int getCount() {
        return 5;
    }

    @Override
    public Object getItem(int position) {
        return null;
    }

    @Override
    public long getItemId(int position) {
        return 0;
    }

    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        return null;
    }

    static class ViewHolder {

        public ViewHolder(View view) {
        }
    }
}
Inside the adapter class, there's a static inner class called ViewHolder. We're going to use this class to hold references to the views of each list item. Let's implement the ViewHolder class as follows:
static class ViewHolder {

    @InjectView(R.id.image_in_item)
    ImageView image;

    @InjectView(R.id.textview_in_item)
    TextView text;

    public ViewHolder(View view) {
        ButterKnife.inject(this, view);
    }
}
All we have to do now is modify the getView() method as follows:
public View getView(int position, View convertView, ViewGroup parent) {
    ViewHolder holder;

    View view = inflater.inflate(R.layout.list_view_item, parent, false);
    holder = new ViewHolder(view);

    Picasso.with(inflater.getContext())
           .load("http://lorempixel.com/200/200/sports/" + (position + 1))
           .into(holder.image);

    holder.text.setText("This is a text for the image number: " + position);

    return view;
}
In this method, I'm inflating the custom layout into the view variable and using it to create an object of the ViewHolder class. Note that we're using the Picasso class to load the remote images and populating the text view with some text. You may find the Picasso tutorial useful if you want to become more familiar with this library.
Don't forget to add the android.permission.INTERNET permission to the Android manifest. If you don't, Picasso won't be able to connect to the web and load the remote images.
Finally, all you have to do is instantiate the list view and attach the adapter. I'm going to do this inside a new activity, ListViewActivity. You can see an example of this implementation in the source files of this tutorial.
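A sketch of what such an activity could look like follows. The list view id and layout name are assumptions for illustration; the tutorial's source files contain the actual implementation.

```java
import android.app.Activity;
import android.os.Bundle;
import android.widget.ListView;

public class ListViewActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_list_view);

        // Look up the list view and attach the adapter we implemented above.
        ListView listView = (ListView) findViewById(R.id.list_view);
        listView.setAdapter(new ListViewAdapter(getLayoutInflater()));
    }
}
```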
You can use Butter Knife's annotations for events too. Choose the annotation you want to use, according to the event you want to respond to, and put it before the method you want to execute when the event occurs.
@OnClick(R.id.sample_textview)
public void showToastMessage() {
    Toast.makeText(MainActivity.this, "This is a message from the activity", Toast.LENGTH_SHORT).show();
}
Conclusion
You can use Butter Knife's inject() method anywhere you would otherwise use the findViewById() method to save time and avoid code repetition when you have to instantiate the views in a layout. Feel free to share this quick tip if you found it helpful.
In the previous part of this series, we implemented the stubs for the game's main classes. In this tutorial, we will get the invaders moving, bullets firing for both the invaders and player, and implement collision detection. Let's get started.
1. Moving the Invaders
We will use the scene's update method to move the invaders. Whenever you want to move something manually, the update method is generally where you'd want to do this.
Before we do this though, we need to update the rightBounds property. It was initially set to 0, because we need the scene's size to set the variable, and that size isn't available outside the class's methods. We will therefore update this property in the didMoveToView(_:) method.
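A sketch of that update is shown below. The 30-point inset is an assumed value for illustration; the rest of the setup code from the previous tutorial stays as it is.

```swift
override func didMoveToView(view: SKView) {
    // The scene's size is only known once the scene is presented,
    // so we update rightBounds here rather than at declaration time.
    self.rightBounds = self.size.width - 30

    // ... the setup code from the previous tutorial goes here ...
}
```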
Next, implement the moveInvaders method below the setupPlayer method you created in the previous tutorial.
func moveInvaders() {
    var changeDirection = false

    enumerateChildNodesWithName("invader") { node, stop in
        let invader = node as SKSpriteNode
        let invaderHalfWidth = invader.size.width / 2

        invader.position.x -= CGFloat(self.invaderSpeed)

        if invader.position.x > self.rightBounds - invaderHalfWidth || invader.position.x < self.leftBounds + invaderHalfWidth {
            changeDirection = true
        }
    }

    if changeDirection == true {
        self.invaderSpeed *= -1
        self.enumerateChildNodesWithName("invader") { node, stop in
            let invader = node as SKSpriteNode
            invader.position.y -= CGFloat(46)
        }
        changeDirection = false
    }
}
We declare a variable, changeDirection, to keep track of when the invaders need to change direction, moving left or moving right. We then use the enumerateChildNodesWithName(_:usingBlock:) method, which searches a node's children and calls the closure once for each child whose name matches "invader". The closure accepts two parameters: node is the matching node and stop is a pointer to a boolean variable that can terminate the enumeration. We will not be using stop here, but it is good to know what it is used for.
We cast node to an SKSpriteNode instance, of which the Invader class is a subclass, get half its width, invaderHalfWidth, and update its position. We then check if its position is within the bounds, leftBounds and rightBounds, and, if not, we set changeDirection to true.
If changeDirection is true, we negate invaderSpeed, which will change the direction the invader moves in. We then enumerate through the invaders and update their y position. Lastly, we set changeDirection back to false.
The moveInvaders method is called in the update(_:) method.
If you test the application now, you should see the invaders move left, right, and then down if they reach the bounds that we have set on either side.
2. Firing Invader Bullets
Step 1: fireBullet
Every so often we want one of the invaders to fire a bullet. As it stands now, the invaders in the bottom row are set up to fire a bullet, because they are in the invadersWhoCanFire array.
When an Invader gets hit by a player bullet, then the invader one row up and in the same column will be added to the invadersWhoCanFire array, while the invader that got hit will be removed. This way only the bottommost invader of every column can fire bullets.
Add the fireBullet method to the Invader class in Invader.swift.
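A minimal sketch of this method, where the bullet's offset, target point, and two-second duration are assumptions, might look like this:

```swift
func fireBullet(scene: SKScene){
    // No sound should play for invader bullets, so bulletSound is nil.
    let bullet = InvaderBullet(imageName: "laser", bulletSound: nil)
    bullet.position.x = position.x
    bullet.position.y = position.y - size.height/2
    scene.addChild(bullet)
    // Move the bullet below the bottom of the scene, then remove it.
    let moveBulletAction = SKAction.moveTo(CGPoint(x: position.x, y: 0 - bullet.size.height), duration: 2.0)
    let removeBulletAction = SKAction.removeFromParent()
    bullet.runAction(SKAction.sequence([moveBulletAction, removeBulletAction]))
}
```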
In the fireBullet method, we instantiate an InvaderBullet instance, passing in "laser" for imageName, and because we don't want a sound to play we pass in nil for bulletSound. We set its position to be the same as the invader's, with a slight offset on the y position, and add it to the scene.
We create two SKAction instances, moveBulletAction and removeBulletAction. The moveBulletAction action moves the bullet to a certain point over a certain duration while the removeBulletAction action removes it from the scene. By invoking the sequence(_:) method on these actions, they will run sequentially. This is why I mentioned the waitForDuration method when playing a sound in the previous part of this series. If you create an SKAction object by invoking playSoundFileNamed(_:waitForCompletion:) and set waitForCompletion to true, then the duration of that action would be for as long as the sound plays, otherwise it would skip immediately to the next action in the sequence.
Step 2: invokeInvaderFire
Add the invokeInvaderFire method below the other methods you've created in GameScene.swift.
func invokeInvaderFire(){
    let fireBullet = SKAction.runBlock(){
        self.fireInvaderBullet()
    }
    let waitToFireInvaderBullet = SKAction.waitForDuration(1.5)
    let invaderFire = SKAction.sequence([fireBullet,waitToFireInvaderBullet])
    let repeatForeverAction = SKAction.repeatActionForever(invaderFire)
    runAction(repeatForeverAction)
}
The runBlock(_:) method of the SKAction class creates an SKAction instance that invokes the closure passed to it when the action runs. In the closure, we invoke the fireInvaderBullet method. Because we invoke this method inside a closure, we have to use self to call it.
We then create an SKAction instance named waitToFireInvaderBullet by invoking waitForDuration(_:), passing in the number of seconds to wait before moving on. Next, we create an SKAction instance, invaderFire, by invoking the sequence(_:) method. This method accepts a collection of actions that are invoked in order by the invaderFire action. We want this sequence to repeat forever, so we create an action named repeatForeverAction by invoking repeatActionForever(_:) with invaderFire as the action to repeat, and finally invoke runAction, passing in the repeatForeverAction action. The runAction method is declared in the SKNode class.
Step 3: fireInvaderBullet
Add the fireInvaderBullet method below the invokeInvaderFire method you entered in the previous step.
func fireInvaderBullet(){
    let randomInvader = invadersWhoCanFire.randomElement()
    randomInvader.fireBullet(self)
}
In this method, we call what seems to be a method named randomElement that would return a random element out of the invadersWhoCanFire array, and then call its fireBullet method. There is, unfortunately, no built-in randomElement method on the Array structure. However, we can create an Array extension to provide this functionality.
Step 4: Implement randomElement
Go to File > New > File... and choose Swift File. We are doing something different than before, so make sure you choose Swift File and not Cocoa Touch Class. Press Next and name the file Utilities. Add the following to Utilities.swift.
import Foundation

extension Array {
    // Returns a random element. The array must not be empty.
    func randomElement() -> T {
        let index = Int(arc4random_uniform(UInt32(self.count)))
        return self[index]
    }
}
We extend the Array structure with a method named randomElement. The arc4random_uniform function returns a number between 0 and one less than the value you pass in. Because Swift doesn't implicitly convert numeric types, we must do the conversions ourselves. Finally, we return the element of the array at index index.
This example illustrates how easy it is to add functionality to existing structures and classes. You can read more about creating extensions in The Swift Programming Language.
Step 5: Firing the Bullet
With all this out of the way, we can now fire the bullets. Add the following to the didMoveToView(_:) method.
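The snippet is simply a call to the method we just wrote:

```swift
// In didMoveToView(_:), start the repeating invader-fire action.
invokeInvaderFire()
```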
If you test the application now, every second or so you should see one of the invaders from the bottom row fire a bullet.
3. Firing Player Bullets
Step 1: fireBullet(scene:)
Add the following property to the Player class in Player.swift.
class Player: SKSpriteNode {
private var canFire = true
We want to limit how often the player can fire a bullet. The canFire property will be used to regulate that. Next, add the following to the fireBullet(scene:) method in the Player class.
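A minimal sketch of the method as described in the next paragraphs; the offsets, the one-second bullet travel time, and the half-second cooldown are assumptions:

```swift
func fireBullet(scene: SKScene){
    if !canFire {
        return
    }
    canFire = false
    let bullet = PlayerBullet(imageName: "laser", bulletSound: "laser.mp3")
    bullet.position.x = position.x
    bullet.position.y = position.y + size.height/2
    scene.addChild(bullet)
    // Move the bullet past the top of the scene, then remove it.
    let moveBulletAction = SKAction.moveTo(CGPoint(x: position.x, y: scene.size.height + bullet.size.height), duration: 1.0)
    let removeBulletAction = SKAction.removeFromParent()
    bullet.runAction(SKAction.sequence([moveBulletAction, removeBulletAction]))
    // Re-enable firing after a short delay.
    let waitToEnableFire = SKAction.waitForDuration(0.5)
    runAction(waitToEnableFire, completion: {
        self.canFire = true
    })
}
```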
We first make sure the player is able to fire by checking if canFire is set to true. If it isn't, we immediately return from the method.
If the player can fire, we set canFire to false so they cannot immediately fire another bullet. We then instantiate a PlayerBullet instance, passing in "laser" for the imageName parameter. Because we want a sound to play when the player fires a bullet, we pass in "laser.mp3" for the bulletSound parameter.
We then set the bullet's position and add it to the scene. The next few lines are the same as the Invader's fireBullet method in that we move the bullet and remove it from the scene. Next, we create an SKAction instance, waitToEnableFire, by invoking the waitForDuration(_:) class method. Lastly, we invoke runAction, passing in waitToEnableFire, and on completion set canFire back to true.
Step 2: Firing the Player Bullet
Whenever the user touches the screen, we want to fire a bullet. This is as simple as calling fireBullet on the player object in the touchesBegan(_:withEvent:) method of the GameScene class.
override func touchesBegan(touches: NSSet, withEvent event: UIEvent) {
    /* Called when a touch begins */
    for touch: AnyObject in touches {
        player.fireBullet(self)
    }
}
If you test the application now, you should be able to fire a bullet when you tap the screen. Also, you should hear the laser sound every time a bullet is fired.
4. Collision Categories
To detect when nodes are colliding or making contact with each other, we will use Sprite Kit's built-in physics engine. However, the default behavior of the physics engine is that everything collides with everything once a physics body has been added. We need a way to separate what we want interacting with each other, and we can do this by creating categories to which specific physics bodies belong.
You define these categories using a bit mask that uses a 32-bit integer with 32 individual flags that can be either on or off. This also means you can only have a maximum of 32 categories for your game. This should not present a problem for most games, but it is something to keep in mind.
Add the following structure definition to the GameScene class, below the invaderNum declaration in GameScene.swift.
struct CollisionCategories{
    static let Invader: UInt32 = 0x1 << 0
    static let Player: UInt32 = 0x1 << 1
    static let InvaderBullet: UInt32 = 0x1 << 2
    static let PlayerBullet: UInt32 = 0x1 << 3
}
We use a structure, CollisionCategories, to create categories for the Invader, Player, InvaderBullet, and PlayerBullet classes. We use bit shifting to turn the individual bits on.
5. Player and InvaderBullet Collision
Step 1: Setting Up InvaderBullet for Collision
Add the following code block to the init(imageName:bulletSound:) method in InvaderBullet.swift.
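A sketch of the physics setup described below, assuming the bullet's texture has already been set by the superclass initializer:

```swift
// In init(imageName:bulletSound:), after the call to super.
self.physicsBody = SKPhysicsBody(texture: self.texture!, size: self.size)
self.physicsBody?.dynamic = true
self.physicsBody?.usesPreciseCollisionDetection = true
self.physicsBody?.categoryBitMask = CollisionCategories.InvaderBullet
self.physicsBody?.contactTestBitMask = CollisionCategories.Player
// Contacts should be reported, but no physics forces applied.
self.physicsBody?.collisionBitMask = 0x0
```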
There are several ways to create a physics body. In this example, we use the init(texture:size:) initializer, which will make the collision detection use the shape of the texture we pass in. There are several other initializers available, which you can see in the SKPhysicsBody class reference.
We could easily have used the init(rectangleOfSize:) initializer, because the bullets are rectangular in shape. In a game this small it does not matter. However, be aware that using the init(texture:size:) method can be computationally expensive since it has to calculate the exact shape of the texture. If you have objects that are rectangular or circular in shape, then you should use those types of initializers if the game's performance is becoming a problem.
For collision detection to work, at least one of the bodies you are testing has to be marked as dynamic. By setting the usesPreciseCollisionDetection property to true, Sprite Kit uses a more precise collision detection. Set this property to true on small, fast moving bodies like our bullets.
Each body will belong to a category and you define this by setting its categoryBitMask. Since this is the InvaderBullet class, we set it to CollisionCategories.InvaderBullet.
To tell when this body has made contact with another body you are interested in, you set the contactTestBitMask. Here we want to know when the InvaderBullet has made contact with the player, so we use CollisionCategories.Player. Because a collision shouldn't trigger any physics forces, we set collisionBitMask to 0x0.
Step 2: Setting Up Player for Collision
Add the following to the init method in Player.swift.
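A sketch of the Player body setup, mirroring the bullet's, with the assumptions noted in the comments:

```swift
self.physicsBody = SKPhysicsBody(texture: self.texture!, size: self.size)
self.physicsBody?.dynamic = true
// Only one of the contacting bodies (the bullet) needs precise detection.
self.physicsBody?.usesPreciseCollisionDetection = false
self.physicsBody?.categoryBitMask = CollisionCategories.Player
// Report contact with both invader bullets and the invaders themselves.
self.physicsBody?.contactTestBitMask = CollisionCategories.InvaderBullet | CollisionCategories.Invader
self.physicsBody?.collisionBitMask = 0x0
```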
Much of this should be familiar from the previous step, so I will not rehash it here. There are two differences to notice, though. One is that usesPreciseCollisionDetection is set to false, which is the default. It is important to realize that only one of the contacting bodies needs this property set to true (in our case, the bullet). The other difference is that we also want to know when the player contacts an invader. You can test for more than one category by combining them with the bitwise OR operator (|) in the contactTestBitMask. Other than that, the setup is essentially the mirror of the InvaderBullet's.
6. Invader and PlayerBullet Collision
Step 1: Setting Up Invader for Collision
Add the following to the init method in Invader.swift.
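The Invader body is set up the same way; this sketch mirrors the Player setup with the categories swapped:

```swift
self.physicsBody = SKPhysicsBody(texture: self.texture!, size: self.size)
self.physicsBody?.dynamic = true
self.physicsBody?.usesPreciseCollisionDetection = false
self.physicsBody?.categoryBitMask = CollisionCategories.Invader
// Report contact with player bullets and with the player itself.
self.physicsBody?.contactTestBitMask = CollisionCategories.PlayerBullet | CollisionCategories.Player
self.physicsBody?.collisionBitMask = 0x0
```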
We have to set up the GameScene class to implement the SKPhysicsContactDelegate so we can respond when two bodies collide. Add the following to make the GameScene class conform to the SKPhysicsContactDelegate protocol.
class GameScene: SKScene, SKPhysicsContactDelegate{
Next, we have to set up some properties on the scene's physicsWorld. Enter the following at the top of the didMoveToView(_:) method in GameScene.swift.
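The two lines described below can be sketched as:

```swift
// At the top of didMoveToView(_:): no gravity, and receive contact callbacks.
self.physicsWorld.gravity = CGVectorMake(0, 0)
self.physicsWorld.contactDelegate = self
```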
We set the gravity property of physicsWorld to 0 so that none of the physics bodies in the scene are affected by gravity. You can also do this on a per body basis instead of setting the whole world to have no gravity by setting the affectedByGravity property. We also set the contactDelegate property of the physics world to self, the GameScene instance.
To conform the GameScene class to SKPhysicsContactDelegate protocol, we need to implement the didBeginContact(_:) method. This method is called when two bodies make contact. The implementation of the didBeginContact(_:) method looks like this.
func didBeginContact(contact: SKPhysicsContact) {
    var firstBody: SKPhysicsBody
    var secondBody: SKPhysicsBody
    if contact.bodyA.categoryBitMask < contact.bodyB.categoryBitMask {
        firstBody = contact.bodyA
        secondBody = contact.bodyB
    } else {
        firstBody = contact.bodyB
        secondBody = contact.bodyA
    }
    if ((firstBody.categoryBitMask & CollisionCategories.Invader != 0) &&
        (secondBody.categoryBitMask & CollisionCategories.PlayerBullet != 0)){
        NSLog("Invader and Player Bullet Contact")
    }
    if ((firstBody.categoryBitMask & CollisionCategories.Player != 0) &&
        (secondBody.categoryBitMask & CollisionCategories.InvaderBullet != 0)) {
        NSLog("Player and Invader Bullet Contact")
    }
    if ((firstBody.categoryBitMask & CollisionCategories.Invader != 0) &&
        (secondBody.categoryBitMask & CollisionCategories.Player != 0)) {
        NSLog("Invader and Player Collision Contact")
    }
}
We first declare two variables firstBody and secondBody. When two objects make contact, we don't know which body is which. This means that we first need to make some checks to make sure firstBody is the one with the lower categoryBitMask.
Next, we go through each possible scenario using the bitwise & operator and the collision categories we defined earlier to check what is making contact. We log the result to the console to make sure everything is working as it should. If you test the application, all contacts should be working correctly.
Conclusion
This was a rather long tutorial, but we now have the invaders moving, bullets being fired from both the player and invaders, and contact detection working by using contact bit masks. We are on the home stretch to the final game. In the next and final part of this series, we will have a completed game.
In this tutorial, we're going to focus on how to create a multimedia application for Windows Phone by taking advantage of the device's camera, interacting with the media library, and exploring the possibilities of the Photos Hub.
Using the Camera
The camera is one of the most important features in Windows Phone devices, especially thanks to Nokia, which has created some of the best camera phones available on the market.
As developers, we are able to integrate the camera experience into our application so that users can take pictures and edit them directly within the application. In addition, with the Lens App feature we’ll discuss later, it’s even easier to create applications that can replace the native camera experience.
Note: To interact with the camera, you need to enable the ID_CAP_IS_CAMERA capability in the manifest file.
The first step is to create an area on the page where we can display the image recorded by the camera. We’re going to use VideoBrush, which is one of the native XAML brushes that is able to embed a video. We’ll use it as a background of a Canvas control, as shown in the following sample:
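A minimal sketch of that XAML; the control names (PreviewCanvas, video, previewTransform) are assumptions:

```xml
<Canvas x:Name="PreviewCanvas">
    <Canvas.Background>
        <VideoBrush x:Name="video">
            <VideoBrush.RelativeTransform>
                <CompositeTransform x:Name="previewTransform"
                                    CenterX="0.5" CenterY="0.5" />
            </VideoBrush.RelativeTransform>
        </VideoBrush>
    </Canvas.Background>
</Canvas>
```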
Notice the CompositeTransform that has been applied; its purpose is to keep the correct orientation of the video, based on the camera orientation.
Taking Pictures
Now that we have a place to display the live camera feed, we can use the APIs that are included in the Windows.Phone.Media.Capture namespace. Specifically, the class available to take pictures is called PhotoCaptureDevice (later we’ll see another class for recording videos).
Before initializing the live feed, we need to make two choices: which camera to use, and which of the available resolutions we want to use.
We achieve this by calling the GetAvailableCaptureResolutions() method on the PhotoCaptureDevice class, passing as parameter a CameraSensorLocation object which represents the camera we’re going to use. The method will return a collection of the supported resolutions, which are identified by the Size class.
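As a sketch, picking the first supported resolution of the back camera (the choice of the first element is an assumption):

```csharp
// Requires the Windows.Phone.Media.Capture and System.Linq namespaces.
Size resolution = PhotoCaptureDevice
    .GetAvailableCaptureResolutions(CameraSensorLocation.Back)
    .First();
```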
Tip: It’s safe to use the previous code because every Windows Phone device has a back camera. If we want to interact with the front camera instead, it’s better to check whether one is available first since not all the Windows Phone devices have one. To do this, you can use the AvailableSensorLocation property of the PhotoCaptureDevice class, which is a collection of all the supported cameras.
Once we’ve decided which resolution to use, we can pass it as a parameter (together again with the selected camera) to the OpenAsync() method of the PhotoCaptureDevice class. It will return a PhotoCaptureDevice object which contains the live feed; we simply have to pass it to the SetSource() method of the VideoBrush.
As already mentioned, we handle the camera orientation using the transformation we’ve applied to the VideoBrush: we set the Rotation using the SensorRotationInDegrees property that contains the current angle’s rotation.
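Putting the two previous paragraphs together, assuming the resolution variable from the earlier snippet and the video and previewTransform names from the XAML sketch:

```csharp
PhotoCaptureDevice camera =
    await PhotoCaptureDevice.OpenAsync(CameraSensorLocation.Back, resolution);

// SetSource() accepts a PhotoCaptureDevice thanks to an extension
// method in the Microsoft.Devices namespace.
video.SetSource(camera);

// Keep the preview upright regardless of the sensor's mounting angle.
previewTransform.Rotation = camera.SensorRotationInDegrees;
```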
Note: You may get an error when you try to pass a PhotoCaptureDevice object as a parameter of the SetSource() method of the VideoBrush. If so, you’ll have to add the Microsoft.Devices namespace to your class, since it contains an extension method for the SetSource() method that supports the PhotoCaptureDevice class.
Now the application will simply display the live feed of the camera on the screen. The next step is to take the picture.
The technique used by the API is to create a sequence of frames and save them as a stream. Unfortunately, there’s a limitation in the current SDK: you’ll only be able to take one picture at a time, so you’ll only be able to use sequences made by one frame.
The process starts with a CameraCaptureSequence object, which represents the capture stream. Due to the single-picture limitation previously mentioned, you'll only be able to call the CreateCaptureSequence() method of the PhotoCaptureDevice class by passing 1 as its parameter.
For the same reason, we’re just going to work with the first frame of the sequence that is stored inside the Frames collection. The CaptureStream property of the frame needs to be set with the stream that we’re going to use to store the captured image. In the previous sample, we use a MemoryStream to store the photo in memory. This way, we can save it later in the user’s Photos Hub (specifically, in the Camera Roll album).
Note: To interact with the MediaLibrary class you need to enable the ID_CAP_MEDIALIB_PHOTO capability in the manifest file.
You can also customize many settings of the camera by calling the SetProperty() method on the PhotoCaptureDevice object that requires two parameters: the property to set, and the value to assign. The available properties are defined by two enumerators: KnownCameraGeneralProperties, which contains the general camera properties, and KnownCameraPhotoProperties, which contains the photo-specific properties.
Some properties are read-only, so the only operation you can perform is get their values by using the GetProperty() method.
In the following samples, we use the SetProperty() method to set the flash mode and GetProperty() to get the information if the current region forces phones to play a sound when they take a picture.
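A sketch of both calls, assuming the camera object from earlier; FlashState.Off is one of the values accepted by the flash-mode property:

```csharp
// Disable the flash.
camera.SetProperty(KnownCameraPhotoProperties.FlashMode, FlashState.Off);

// GetProperty() returns object, so a cast is required.
bool shutterSoundRequired = (bool)camera.GetProperty(
    KnownCameraGeneralProperties.IsShutterSoundRequiredForRegion);
```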
Note that the GetProperty() method always returns a generic object, so you’ll have to manually cast it according to the properties you’re querying.
You can see a list of all the available properties in the MSDN documentation.
Using the Hardware Camera Key
Typically, Windows Phone devices have a dedicated button for the camera, which can be used both to set the focus by half-pressing it, and to take the picture by fully pressing it. You are also able to use this button in your applications by subscribing to three events that are exposed by the CameraButtons static class:
ShutterKeyPressed is triggered when the button is pressed.
ShutterKeyReleased is triggered when the button is released.
ShutterKeyHalfPressed is triggered when the button is half-pressed.
In the following sample, we subscribe to the ShutterKeyReleased event to take a picture and the ShutterKeyHalfPressed event to use the auto-focus feature.
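A sketch of those subscriptions; TakePictureAsync() is a hypothetical helper that wraps the capture-sequence code shown earlier:

```csharp
CameraButtons.ShutterKeyHalfPressed += async (sender, e) =>
{
    // Trigger the auto-focus on a half-press.
    await camera.FocusAsync();
};

CameraButtons.ShutterKeyReleased += async (sender, e) =>
{
    await TakePictureAsync();
};
```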
The process to record a video is similar to the one we used to take a picture. In this case, we’re going to use the AudioVideoCaptureDevice class instead of the PhotoCaptureDevice class. As you can see in the following sample, the initialization procedure is the same: we decide which resolution and camera we want to use, and we display the returned live feed using a VideoBrush.
Note: To record videos, you’ll also need to enable the ID_CAP_MICROPHONE capability in the manifest file.
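The initialization sketch, mirroring the photo setup (the choice of the first resolution is again an assumption):

```csharp
Size resolution = AudioVideoCaptureDevice
    .GetAvailableCaptureResolutions(CameraSensorLocation.Back)
    .First();
AudioVideoCaptureDevice videoDevice =
    await AudioVideoCaptureDevice.OpenAsync(CameraSensorLocation.Back, resolution);
video.SetSource(videoDevice);
```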
Recording a video is even simpler since the AudioVideoCaptureDevice class exposes the StartRecordingToStreamAsync() method, which simply requires you to specify where to save the recorded data. Since it’s a video, you’ll also need a way to stop the recording; this is the purpose of the StopRecordingAsync() method.
In the following sample, the recording is stored in a file created in the local storage:
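A sketch of the recording flow; the file name and the fixed five-second recording window are assumptions to keep the example self-contained:

```csharp
StorageFile file = await ApplicationData.Current.LocalFolder
    .CreateFileAsync("video.mp4", CreationCollisionOption.ReplaceExisting);

using (IRandomAccessStream stream = await file.OpenAsync(FileAccessMode.ReadWrite))
{
    await videoDevice.StartRecordingToStreamAsync(stream);

    // In a real app you would stop on user input; a delay keeps the sketch simple.
    await Task.Delay(TimeSpan.FromSeconds(5));

    await videoDevice.StopRecordingAsync();
}
```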
You can easily test the result of the operation by using the MediaPlayerLauncher class to play the recording:
private void OnPlayVideoClicked(object sender, RoutedEventArgs e)
{
    MediaPlayerLauncher launcher = new MediaPlayerLauncher
    {
        Media = new Uri(file.Path, UriKind.Relative)
    };
    launcher.Show();
}
The SDK offers a specific list of customizable settings connected to video recording. They are available in the KnownCameraAudioVideoProperties enumerator.
Interacting With the Media Library
The framework offers a class called MediaLibrary, which can be used to interact with the user media library (photos, music, etc.). Let’s see how to use it to manage the most common scenarios.
Note: In the current version, there’s no way to interact with the library to save new videos in the Camera Roll, nor to get access to the stream of existing videos.
Pictures
The MediaLibrary class can be used to get access to the pictures stored in the Photos Hub, thanks to the Pictures collection. It’s a collection of Picture objects, where each one represents a picture stored in the Photos Hub.
Note: You’ll need to enable the ID_CAP_MEDIALIB_PHOTO capability in the manifest file to get access to the pictures stored in the Photos Hub.
The Pictures collection grants access to the following albums:
Camera Roll
Saved Pictures
Screenshots
All other albums displayed in the Photos Hub that come from remote services like SkyDrive or Facebook can't be accessed using the MediaLibrary class.
Tip: The MediaLibrary class exposes a collection called SavedPictures, which contains only the pictures that are stored in the Saved Pictures album.
Every Picture object offers some properties to get access to the basic info, like Name, Width, and Height. A very important property is Album, which contains the reference of the album where the image is stored. In addition, you’ll be able to get access to different streams in case you want to manipulate the image or display it in your application:
The GetPicture() method returns the stream of the original image.
The GetThumbnail() method returns the stream of the thumbnail, which is a low-resolution version of the original image.
If you add the PhoneExtensions namespace to your class, you’ll be able to use the GetPreviewImage() method, which returns a preview picture. Its resolution and size are between the original image and the thumbnail.
In the following sample, we generate the thumbnail of the first available picture in the Camera Roll and display it using an Image control:
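A sketch of that sample; Thumbnail is an assumed Image control defined in the page's XAML:

```csharp
MediaLibrary library = new MediaLibrary();
Picture picture = library.Pictures
    .FirstOrDefault(p => p.Album.Name == "Camera Roll");
if (picture != null)
{
    BitmapImage image = new BitmapImage();
    image.SetSource(picture.GetThumbnail());
    Thumbnail.Source = image;
}
```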
Tip: To interact with the MediaLibrary class using the emulator, you’ll have to open the Photos Hub at least once; otherwise you will get an empty collection of pictures when you query the Pictures property.
With the MediaLibrary class, you'll also be able to do the opposite: take a picture in your application and save it in the Photos Hub. We've already seen a sample when we talked about integrating the camera in our application; we can save the picture in the Camera Roll (using the SavePictureToCameraRoll() method) or in the Saved Pictures album (using the SavePicture() method). In both cases, the required parameters are the name of the image and its stream.
In the following sample, we download an image from the Internet and save it in the Saved Pictures album:
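A sketch of that download-and-save flow; the image URL is a placeholder:

```csharp
WebClient client = new WebClient();
client.OpenReadCompleted += (sender, e) =>
{
    MediaLibrary library = new MediaLibrary();
    library.SavePicture("picture.jpg", e.Result);
};
client.OpenReadAsync(new Uri("http://www.example.com/picture.jpg"));
```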
Music
The MediaLibrary class offers many options for accessing music, but there are some limitations that aren't present when working with pictures.
Note: You’ll need to enable the ID_CAP_MEDIALIB_AUDIO capability in the manifest file to get access to the music stored on the device.
The following collections are exposed by the MediaLibrary class for accessing music:
Albums to get access to music albums.
Songs to get access to all the available songs.
Genres to get access to the songs grouped by genre.
Playlists to get access to playlists.
Every song is identified by the Song class, which contains all the common information about a music track taken directly from the ID3 tag: Album, Artist, Title, TrackNumber, and so on.
Unfortunately, there’s no access to a song’s stream, so the only way to play tracks is by using the MediaPlayer class, which is part of the Microsoft.XNA.Framework.Media namespace. This class exposes many methods to interact with tracks. The Play() method accepts as a parameter a Song object, retrieved from the MediaLibrary.
In the following sample, we reproduce the first song available in the library:
private void OnPlaySong(object sender, RoutedEventArgs e)
{
    MediaLibrary library = new MediaLibrary();
    Song song = library.Songs.FirstOrDefault();
    MediaPlayer.Play(song);
}
One of the new features introduced in Windows Phone 8 allows you to save a song stored in the application’s local storage to the media library so that it can be played by the native Music + Videos Hub. This requires the Microsoft.Xna.Framework.Media.PhoneExtensions namespace to be added to your class.
private async void OnDownloadMusicClicked(object sender, RoutedEventArgs e)
{
    MediaLibrary library = new MediaLibrary();
    SongMetadata metadata = new SongMetadata
    {
        AlbumName = "A rush of blood to the head",
        ArtistName = "Coldplay",
        Name = "Clocks"
    };
    library.SaveSong(new Uri("song.mp3", UriKind.RelativeOrAbsolute), metadata, SaveSongOperation.CopyToLibrary);
}
The SaveSong() method requires three parameters, as shown in the previous sample:
The path of the song to save. It’s a relative path that points to the local storage.
The song metadata, which is identified by the SongMetadata class. It’s an optional parameter; if you pass null, Windows Phone will automatically extract the ID3 information from the file.
A SaveSongOperation object, which tells the media library if the file should be copied (CopyToLibrary) or moved (MoveToLibrary) so that it’s deleted from the storage.
Lens Apps
Windows Phone 8 has introduced new features specific to photographic applications. Some of the most interesting are called lens apps, which apply different filters and effects to pictures. Windows Phone offers a way to easily switch between different camera applications to apply filters on the fly.
Lens apps are regular Windows Phone applications that interact with the Camera APIs we used at the beginning of this article. The difference is that a lens app is displayed in the lenses section of the native Camera app; when users press the camera button, a special view with all the available lens apps is displayed. This way, they can easily switch to another application to take the picture.
Integration with the lenses view starts from the manifest file, which must be manually edited by choosing the View code option in the context menu. The following code has to be added in the Extension section:
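A sketch of that declaration; the extension name, consumer ID, and task ID shown here are the values commonly used for WP8 lens registration, but treat them as assumptions and verify against the official documentation:

```xml
<Extensions>
  <Extension ExtensionName="Camera_Capture_App"
             ConsumerID="{5B04B775-356B-4AA0-AAF8-6491FFEA5632}"
             TaskID="_default" />
</Extensions>
```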
Every lens app needs a specific icon that is displayed in the lenses view. Icons are automatically retrieved from the Assets folder based on a naming convention. An icon must be added for every supported resolution using the conventions in the following table:
Resolution    Icon size    File name
480 × 800     173 × 173    Lens.Screen-WVGA.png
768 × 1280    277 × 277    Lens.Screen-WXGA.png
720 × 1280    259 × 259    Lens.Screen-720p.png
The UriMapper class is required for working with lens apps. In fact, lens apps are opened using a special URI that has to be intercepted and managed. The following code is a sample Uri:
/MainPage.xaml?Action=ViewfinderLaunch
When this Uri is intercepted, users should be redirected to the application page that takes the picture. In the following sample, you can see a UriMapper implementation that redirects users to a page called Camera.xaml when the application is opened from the lens view.
public class MyUriMapper : UriMapperBase
{
    public override Uri MapUri(Uri uri)
    {
        string tempUri = uri.ToString();
        if (tempUri.Contains("ViewfinderLaunch"))
        {
            return new Uri("/Camera.xaml", UriKind.Relative);
        }
        else
        {
            return uri;
        }
    }
}
Support Sharing
If you’ve developed an application that supports photo sharing such as a social network client, you can integrate it in the Share menu of the Photos Hub. Users can find this option in the Application Bar in the photo details page.
When users choose this option, Windows Phone displays a list of applications that are registered to support sharing. We can add our application to the list simply by adding a new extension in the manifest file, as we did to add lens support.
We have to manually add the following declaration in the Extensions section:
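A sketch of the share declaration; as with the lens extension, the consumer and task IDs are the values commonly shown for the Photos Hub and should be verified against the documentation:

```xml
<Extension ExtensionName="Photos_Extra_Share"
           ConsumerID="{5B04B775-356B-4AA0-AAF8-6491FFEA5632}"
           TaskID="_default" />
```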
Again, we can use a UriMapper implementation to redirect users to our application's page that offers the sharing feature. It's also important to carry the FileId parameter to this page; we're going to need it to know which photo has been selected by the user.
The following sample shows a UriMapper implementation that simply replaces the name of the original page (MainPage.xaml) with the name of the destination page (SharePage.xaml):
public class MyUriMapper : UriMapperBase
{
    public override Uri MapUri(Uri uri)
    {
        string tempUri = uri.ToString();
        string mappedUri;
        if ((tempUri.Contains("SharePhotoContent")) && (tempUri.Contains("FileId")))
        {
            // Redirect to SharePage.xaml.
            mappedUri = tempUri.Replace("MainPage", "SharePage");
            return new Uri(mappedUri, UriKind.Relative);
        }
        return uri;
    }
}
After redirecting the user to the sharing page, we can use a method called GetPictureFromToken() exposed by the MediaLibrary class. It accepts the unique picture ID as a parameter and returns a reference to the Picture object that represents the image selected by the user.
The picture ID is the parameter called FileId that we received in the URI when the application was opened. In the following sample, you can see how we retrieve the parameter in the OnNavigatedTo method, which is invoked when the user is redirected to the sharing page, and use it to display the selected picture with an Image control.
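A sketch of that page code; SharedImage is an assumed Image control in the page's XAML:

```csharp
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);

    if (NavigationContext.QueryString.ContainsKey("FileId"))
    {
        string fileId = NavigationContext.QueryString["FileId"];

        // GetPictureFromToken() is the extension method mentioned above.
        MediaLibrary library = new MediaLibrary();
        Picture picture = library.GetPictureFromToken(fileId);

        BitmapImage image = new BitmapImage();
        image.SetSource(picture.GetPicture());
        SharedImage.Source = image;
    }
}
```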
There are other ways to integrate our application with the Photos Hub. They all work the same way:
A declaration must be added to the manifest file.
The application is opened using a special Uri that you need to intercept with a UriMapper class.
The user is redirected to a dedicated page in which you can retrieve the selected image by using the FileId parameter.
List the Application as a Photographic App
This is the simplest integration since it just displays the application in the Apps section of the Photos Hub. To support it, you simply have to add the following declaration in the manifest file:
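A sketch of the declaration for the Apps section; the extension name is the one commonly documented for this integration, but verify it against the official documentation:

```xml
<Extension ExtensionName="Photos_Extra_Hub"
           ConsumerID="{5B04B775-356B-4AA0-AAF8-6491FFEA5632}"
           TaskID="_default" />
```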
Nothing else is required since this kind of integration will simply include a quick link in the Photos Hub. The application will be opened normally, as if it was opened using the main app icon.
Integrating With the Edit Option
Another option available in the Application Bar of the photo details page is called edit. When the user taps it, Windows Phone displays a list of applications that support photo editing. After choosing one, the user expects to be redirected to an application page where the selected picture can be edited.
The following declaration should be added in the manifest file:
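A sketch of the edit declaration; the extension name (and the EditPhotoContent action carried by the launch Uri) are assumptions based on the standard WP8 photo-edit extension, so verify them against the documentation:

```xml
<Extension ExtensionName="Photos_Extra_Image_Editor"
           ConsumerID="{5B04B775-356B-4AA0-AAF8-6491FFEA5632}"
           TaskID="_default" />
```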
As before, you'll need to intercept the incoming Uri with your UriMapper and redirect users to the proper page, where you'll be able to retrieve the selected image by using the FileId parameter, as we did for the photo sharing feature.
Rich Media Apps
Rich media apps are applications that are able to take pictures and save them in the user’s library. When users open one of these photos, they will see:
text under the photo with the message “captured by” followed by the app’s name
a new option in the Application Bar called “open in” followed by the app’s name
This approach is similar to the sharing and editing features. The difference is that the rich media apps integration is available only for pictures taken within the application, while editing and sharing features are available for every photo, regardless of how they were taken.
The following declaration should be added in the manifest to enable rich media app integration:
As you can see, the URI is always the same; what changes is the value of the Action parameter—in this case, RichMediaEdit.
This is the URI you need to intercept with your UriMapper implementation. You’ll need to redirect users to a page of your application that is able to manage the selected picture.
Conclusion
In this tutorial, we’ve learned many ways to create a great multimedia application for Windows Phone by:
integrating camera features to take photos and record videos
interacting with the media library to get access to pictures and audio
integrating with the native camera experience to give users access to advanced features directly in the Photos Hub
This tutorial represents a chapter from Windows Phone 8 Succinctly, a free eBook from the team at Syncfusion.
In this tutorial, we're going to focus on how to create a multimedia application for Windows Phone by taking advantage of the device's camera, interacting with the media library, and exploring the possibilities of the Photos Hub.
Using the Camera
The camera is one of the most important features in Windows Phone devices, especially thanks to Nokia, which has created some of the best camera phones available on the market.
As developers, we are able to integrate the camera experience into our application so that users can take pictures and edit them directly within the application. In addition, with the Lens App feature we’ll discuss later, it’s even easier to create applications that can replace the native camera experience.
Note: To interact with the camera, you need to enable the ID_CAP_IS_CAMERA capability in the manifest file.
The first step is to create an area on the page where we can display the image recorded by the camera. We’re going to use VideoBrush, which is one of the native XAML brushes that is able to embed a video. We’ll use it as a background of a Canvas control, as shown in the following sample:
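The XAML sample referenced here is missing; a minimal sketch could look like the following (the x:Name values PreviewCanvas, PreviewBrush, and PreviewTransform are assumed names, not from the original):

```xml
<!-- Canvas whose background is the camera's live feed; names are illustrative. -->
<Canvas x:Name="PreviewCanvas" Width="480" Height="800">
    <Canvas.Background>
        <VideoBrush x:Name="PreviewBrush">
            <VideoBrush.RelativeTransform>
                <!-- Rotated at runtime to match the camera sensor orientation. -->
                <CompositeTransform x:Name="PreviewTransform" CenterX="0.5" CenterY="0.5" />
            </VideoBrush.RelativeTransform>
        </VideoBrush>
    </Canvas.Background>
</Canvas>
```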
Notice the CompositeTransform that has been applied; its purpose is to keep the correct orientation of the video, based on the camera orientation.
Taking Pictures
Now that we have a place to display the live camera feed, we can use the APIs that are included in the Windows.Phone.Media.Capture namespace. Specifically, the class available to take pictures is called PhotoCaptureDevice (later we’ll see another class for recording videos).
Before initializing the live feed, we need to make two choices: which camera to use, and which of the available resolutions we want to use.
We achieve this by calling the GetAvailableCaptureResolutions() method on the PhotoCaptureDevice class, passing as parameter a CameraSensorLocation object which represents the camera we’re going to use. The method will return a collection of the supported resolutions, which are identified by the Size class.
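A minimal sketch of this call, assuming we simply pick the first supported resolution of the back camera:

```csharp
// Ask the back camera for its supported capture resolutions.
IReadOnlyList<Windows.Foundation.Size> resolutions =
    PhotoCaptureDevice.GetAvailableCaptureResolutions(CameraSensorLocation.Back);

// Pick one (here, simply the first) to use when opening the camera.
Windows.Foundation.Size resolution = resolutions.First();
```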
Tip: It’s safe to use the previous code because every Windows Phone device has a back camera. If you want to interact with the front camera instead, it’s better to check whether one is available first, since not all Windows Phone devices have one. To do this, you can use the AvailableSensorLocations property of the PhotoCaptureDevice class, which is a collection of all the supported cameras.
Once we’ve decided which resolution to use, we can pass it as a parameter (together again with the selected camera) to the OpenAsync() method of the PhotoCaptureDevice class. It will return a PhotoCaptureDevice object which contains the live feed; we simply have to pass it to the SetSource() method of the VideoBrush.
As already mentioned, we handle the camera orientation using the transformation we’ve applied to the VideoBrush: we set the Rotation using the SensorRotationInDegrees property that contains the current angle’s rotation.
Note: You may get an error when you try to pass a PhotoCaptureDevice object as a parameter of the SetSource() method of the VideoBrush. If so, you’ll have to add the Microsoft.Devices namespace to your class, since it contains an extension method for the SetSource() method that supports the PhotoCaptureDevice class.
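Putting the initialization steps together, a sketch could look like this (PreviewBrush and PreviewTransform are assumed names for the VideoBrush and its CompositeTransform defined in XAML):

```csharp
private PhotoCaptureDevice camera;

private async Task InitializeCameraAsync()
{
    Windows.Foundation.Size resolution = PhotoCaptureDevice
        .GetAvailableCaptureResolutions(CameraSensorLocation.Back)
        .First();

    // Open the camera and get the object that exposes the live feed.
    camera = await PhotoCaptureDevice.OpenAsync(CameraSensorLocation.Back, resolution);

    // SetSource() for PhotoCaptureDevice is an extension method in Microsoft.Devices.
    PreviewBrush.SetSource(camera);

    // Compensate for the sensor orientation so the preview is upright.
    PreviewTransform.Rotation = camera.SensorRotationInDegrees;
}
```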
Now the application will simply display the live feed of the camera on the screen. The next step is to take the picture.
The technique used by the API is to create a sequence of frames and save them as a stream. Unfortunately, there’s a limitation in the current SDK: you’ll only be able to take one picture at a time, so you’ll only be able to use sequences made by one frame.
The process starts with a CameraCaptureSequence object, which represents the capture stream. Due to the single-picture limitation previously mentioned, you’ll be able to call the CreateCaptureSequence() method of the PhotoCaptureDevice class only by passing 1 as its parameter.
For the same reason, we’re just going to work with the first frame of the sequence that is stored inside the Frames collection. The CaptureStream property of the frame needs to be set with the stream that we’re going to use to store the captured image. In the previous sample, we use a MemoryStream to store the photo in memory. This way, we can save it later in the user’s Photos Hub (specifically, in the Camera Roll album).
Note: To interact with the MediaLibrary class you need to enable the ID_CAP_MEDIALIB_PHOTO capability in the manifest file.
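A sketch of the capture flow described above, assuming camera is an already opened PhotoCaptureDevice and photo.jpg is an illustrative file name:

```csharp
private async Task TakePictureAsync()
{
    // The SDK only supports sequences made of a single frame.
    CameraCaptureSequence sequence = camera.CreateCaptureSequence(1);

    // Store the captured frame in memory.
    MemoryStream stream = new MemoryStream();
    sequence.Frames[0].CaptureStream = stream.AsOutputStream();

    await camera.PrepareCaptureSequenceAsync(sequence);
    await sequence.StartCaptureAsync();

    // Rewind the stream and save the photo in the Camera Roll.
    stream.Seek(0, SeekOrigin.Begin);
    MediaLibrary library = new MediaLibrary();
    library.SavePictureToCameraRoll("photo.jpg", stream);
}
```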
You can also customize many settings of the camera by calling the SetProperty() method on the PhotoCaptureDevice object that requires two parameters: the property to set, and the value to assign. The available properties are defined by two enumerators: KnownCameraGeneralProperties, which contains the general camera properties, and KnownCameraPhotoProperties, which contains the photo-specific properties.
Some properties are read-only, so the only operation you can perform is get their values by using the GetProperty() method.
In the following samples, we use the SetProperty() method to set the flash mode and GetProperty() to get the information if the current region forces phones to play a sound when they take a picture.
Note that the GetProperty() method always returns a generic object, so you’ll have to manually cast it according to the properties you’re querying.
You can see a list of all the available properties in the MSDN documentation.
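The two calls could be sketched like this, using the flash and shutter-sound properties named in the text:

```csharp
// Turn the flash on for the next captures.
camera.SetProperty(KnownCameraPhotoProperties.FlashMode, FlashState.On);

// Read-only: does the current region force a sound when taking a picture?
// GetProperty() returns object, so a manual cast is needed.
bool shutterSoundRequired = (bool)camera.GetProperty(
    KnownCameraGeneralProperties.IsShutterSoundRequiredForRegion);
```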
Using the Hardware Camera Key
Typically, Windows Phone devices have a dedicated button for the camera, which can be used both to set the focus by half-pressing it, and to take the picture by fully pressing it. You are also able to use this button in your applications by subscribing to three events that are exposed by the CameraButtons static class:
ShutterKeyPressed is triggered when the button is pressed.
ShutterKeyReleased is triggered when the button is released.
ShutterKeyHalfPressed is triggered when the button is half-pressed.
In the following sample, we subscribe to the ShutterKeyReleased event to take a picture and the ShutterKeyHalfPressed event to use the auto-focus feature.
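A sketch of those subscriptions; TakePictureAsync is a hypothetical helper performing the capture, while FocusAsync() is the PhotoCaptureDevice auto-focus method:

```csharp
// Subscribe, for example, in the page constructor or in OnNavigatedTo().
CameraButtons.ShutterKeyHalfPressed += async (sender, e) =>
{
    // Half press: trigger the auto-focus.
    await camera.FocusAsync();
};

CameraButtons.ShutterKeyReleased += async (sender, e) =>
{
    // Full press released: take the picture.
    await TakePictureAsync();
};
```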
The process to record a video is similar to the one we used to take a picture. In this case, we’re going to use the AudioVideoCaptureDevice class instead of the PhotoCaptureDevice class. As you can see in the following sample, the initialization procedure is the same: we decide which resolution and camera we want to use, and we display the returned live feed using a VideoBrush.
Note: To record videos, you’ll also need to enable the ID_CAP_MICROPHONE capability in the manifest file.
Recording a video is even simpler since the AudioVideoCaptureDevice class exposes the StartRecordingToStreamAsync() method, which simply requires you to specify where to save the recorded data. Since it’s a video, you’ll also need a way to stop the recording; this is the purpose of the StopRecordingAsync() method.
In the following sample, the recording is stored in a file created in the local storage:
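A sketch of the recording code, assuming the device is initialized the same way as the photo one (the file name video.mp4 is illustrative):

```csharp
private AudioVideoCaptureDevice videoDevice;
private StorageFile file;

private async Task StartRecordingAsync()
{
    Windows.Foundation.Size resolution = AudioVideoCaptureDevice
        .GetAvailableCaptureResolutions(CameraSensorLocation.Back)
        .First();
    videoDevice = await AudioVideoCaptureDevice.OpenAsync(
        CameraSensorLocation.Back, resolution);

    // Create the destination file in the local storage and record into it.
    file = await ApplicationData.Current.LocalFolder.CreateFileAsync(
        "video.mp4", CreationCollisionOption.ReplaceExisting);
    IRandomAccessStream stream = await file.OpenAsync(FileAccessMode.ReadWrite);
    await videoDevice.StartRecordingToStreamAsync(stream);
}

private async Task StopRecordingAsync()
{
    await videoDevice.StopRecordingAsync();
}
```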
You can easily test the result of the operation by using the MediaPlayerLauncher class to play the recording:
private void OnPlayVideoClicked(object sender, RoutedEventArgs e)
{
MediaPlayerLauncher launcher = new MediaPlayerLauncher
{
Media = new Uri(file.Path, UriKind.Relative)
};
launcher.Show();
}
The SDK offers a specific list of customizable settings connected to video recording. They are available in the KnownCameraAudioVideoProperties enumerator.
Interacting With the Media Library
The framework offers a class called MediaLibrary, which can be used to interact with the user media library (photos, music, etc.). Let’s see how to use it to manage the most common scenarios.
Note: In the current version, there’s no way to interact with the library to save new videos in the Camera Roll, nor to get access to the stream of existing videos.
Pictures
The MediaLibrary class can be used to get access to the pictures stored in the Photos Hub, thanks to the Pictures collection. It’s a collection of Picture objects, where each one represents a picture stored in the Photos Hub.
Note: You’ll need to enable the ID_CAP_MEDIALIB_PHOTO capability in the manifest file to get access to the pictures stored in the Photos Hub.
The Pictures collection grants access to the following albums:
Camera Roll
Saved Pictures
Screenshots
All other albums displayed in the Photos Hub that come from remote services like SkyDrive or Facebook can’t be accessed using the MediaLibrary class.
Tip: The MediaLibrary class exposes a collection called SavedPictures, which contains only the pictures that are stored in the Saved Pictures album.
Every Picture object offers some properties to get access to the basic info, like Name, Width, and Height. A very important property is Album, which contains the reference of the album where the image is stored. In addition, you’ll be able to get access to different streams in case you want to manipulate the image or display it in your application:
The GetPicture() method returns the stream of the original image.
The GetThumbnail() method returns the stream of the thumbnail, which is a low-resolution version of the original image.
If you add the PhoneExtensions namespace to your class, you’ll be able to use the GetPreviewImage() method, which returns a preview picture. Its resolution and size are between the original image and the thumbnail.
In the following sample, we generate the thumbnail of the first available picture in the Camera Roll and display it using an Image control:
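A sketch of that sample; ThumbnailImage is an assumed Image control name, and for simplicity we take the first picture in the Pictures collection rather than filtering by album:

```csharp
MediaLibrary library = new MediaLibrary();
Picture picture = library.Pictures.FirstOrDefault();
if (picture != null)
{
    // GetThumbnail() returns the stream of the low-resolution version.
    BitmapImage image = new BitmapImage();
    image.SetSource(picture.GetThumbnail());
    ThumbnailImage.Source = image;
}
```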
Tip: To interact with the MediaLibrary class using the emulator, you’ll have to open the Photos Hub at least once; otherwise you will get an empty collection of pictures when you query the Pictures property.
With the MediaLibrary class, you’ll also be able to do the opposite: take a picture in your application and save it in the Photos Hub. We’ve already seen a sample when we talked about integrating the camera in our application; we can save the picture in the Camera Roll (using the SavePictureToCameraRoll() method) or in the Saved Pictures album (using the SavePicture() method). In both cases, the required parameters are the name of the image and its stream.
In the following sample, we download an image from the Internet and save it in the Saved Pictures album:
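A sketch of that scenario, assuming the HttpClient NuGet package is available (the URL is a placeholder):

```csharp
private async void OnDownloadImageClicked(object sender, RoutedEventArgs e)
{
    // Download the image as a stream (placeholder URL).
    HttpClient client = new HttpClient();
    Stream stream = await client.GetStreamAsync("http://www.example.com/image.png");

    // Save it in the Saved Pictures album.
    MediaLibrary library = new MediaLibrary();
    library.SavePicture("image.png", stream);
}
```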
Music
The MediaLibrary class offers many options for accessing music, but there are some limitations that aren’t present when working with pictures.
Note: You’ll need to enable the ID_CAP_MEDIALIB_AUDIO capability in the manifest file to get access to the music stored in the media library.
The following collections are exposed by the MediaLibrary class for accessing music:
Albums to get access to music albums.
Songs to get access to all the available songs.
Genres to get access to the songs grouped by genre.
Playlists to get access to playlists.
Every song is identified by the Song class, which contains all the common information about a music track taken directly from the ID3 tag: Album, Artist, Title, TrackNumber, and so on.
Unfortunately, there’s no access to a song’s stream, so the only way to play tracks is by using the MediaPlayer class, which is part of the Microsoft.XNA.Framework.Media namespace. This class exposes many methods to interact with tracks. The Play() method accepts as a parameter a Song object, retrieved from the MediaLibrary.
In the following sample, we play the first song available in the library:
private void OnPlaySong(object sender, RoutedEventArgs e)
{
MediaLibrary library = new MediaLibrary();
Song song = library.Songs.FirstOrDefault();
MediaPlayer.Play(song);
}
One of the new features introduced in Windows Phone 8 allows you to save a song stored in the application’s local storage to the media library so that it can be played by the native Music + Videos Hub. This requires the Microsoft.Xna.Framework.Media.PhoneExtensions namespace to be added to your class.
private async void OnDownloadMusicClicked(object sender, RoutedEventArgs e)
{
MediaLibrary library = new MediaLibrary();
SongMetadata metadata = new SongMetadata
{
AlbumName = "A rush of blood to the head",
ArtistName = "Coldplay",
Name = "Clocks"
};
library.SaveSong(new Uri("song.mp3", UriKind.RelativeOrAbsolute), metadata, SaveSongOperation.CopyToLibrary);
}
The SaveSong() method requires three parameters, as shown in the previous sample:
The path of the song to save. It’s a relative path that points to the local storage.
The song metadata, which is identified by the SongMetadata class. It’s an optional parameter; if you pass null, Windows Phone will automatically extract the ID3 information from the file.
A SaveSongOperation object, which tells the media library if the file should be copied (CopyToLibrary) or moved (MoveToLibrary) so that it’s deleted from the storage.
Lens Apps
Windows Phone 8 has introduced new features specific to photographic applications. Some of the most interesting are called lens apps, which apply different filters and effects to pictures. Windows Phone offers a way to easily switch between different camera applications to apply filters on the fly.
Lens apps are regular Windows Phone applications that interact with the Camera APIs we used at the beginning of this article. The difference is that a lens app is displayed in the lenses section of the native Camera app; when users press the camera button, a special view with all the available lens apps is displayed. This way, they can easily switch to another application to take the picture.
Integration with the lenses view starts from the manifest file, which must be manually edited by choosing the View code option in the context menu. The following code has to be added in the Extension section:
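The declaration itself is missing here; based on the documented Windows Phone 8 lens extensibility convention, it should look like the following (verify the ConsumerID GUID against the official documentation):

```xml
<Extensions>
  <Extension ExtensionName="Camera_Capture_App"
             ConsumerID="{5B04B775-356B-4AA0-AAF8-6491FFEA5632}"
             TaskID="_default" />
</Extensions>
```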
Every lens app needs a specific icon that is displayed in the lenses view. Icons are automatically retrieved from the Assets folder based on a naming convention. An icon must be added for every supported resolution using the conventions in the following table:
Resolution   | Icon size | File name
480 × 800    | 173 × 173 | Lens.Screen-WVGA.png
768 × 1280   | 277 × 277 | Lens.Screen-WXGA.png
720 × 1280   | 259 × 259 | Lens.Screen-720p.png
The UriMapper class is required for working with lens apps. In fact, lens apps are opened using a special URI that has to be intercepted and managed. The following code is a sample Uri:
/MainPage.xaml?Action=ViewfinderLaunch
When this Uri is intercepted, users should be redirected to the application page that takes the picture. In the following sample, you can see a UriMapper implementation that redirects users to a page called Camera.xaml when the application is opened from the lens view.
public class MyUriMapper : UriMapperBase
{
public override Uri MapUri(Uri uri)
{
string tempUri = uri.ToString();
if (tempUri.Contains("ViewfinderLaunch"))
{
return new Uri("/Camera.xaml", UriKind.Relative);
}
else
{
return uri;
}
}
}
Support Sharing
If you’ve developed an application that supports photo sharing such as a social network client, you can integrate it in the Share menu of the Photos Hub. Users can find this option in the Application Bar in the photo details page.
When users choose this option, Windows Phone displays a list of applications that are registered to support sharing. We can add our application to the list simply by adding a new extension in the manifest file, as we did to add lens support.
We have to manually add the following declaration in the Extensions section:
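The declaration itself is missing here; based on the documented Windows Phone 8 photo-sharing extensibility convention, it should look like the following (verify the ConsumerID GUID against the official documentation):

```xml
<Extensions>
  <Extension ExtensionName="Photos_Extra_Share"
             ConsumerID="{5B04B775-356B-4AA0-AAF8-6491FFEA5632}"
             TaskID="_default" />
</Extensions>
```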
Again, we can use a UriMapper implementation to redirect users to the page of our application that offers the sharing feature. It’s also important to carry the FileId parameter to this page; we’re going to need it to know which photo the user selected.
The following sample shows a UriMapper implementation that simply replaces the name of the original page (MainPage.xaml) with the name of the destination page (SharePage.xaml):
public class MyUriMapper: UriMapperBase
{
public override Uri MapUri(Uri uri)
{
string tempUri = uri.ToString();
string mappedUri;
if ((tempUri.Contains("SharePhotoContent")) && (tempUri.Contains("FileId")))
{
// Redirect to SharePage.xaml.
mappedUri = tempUri.Replace("MainPage", "SharePage");
return new Uri(mappedUri, UriKind.Relative);
}
return uri;
}
}
After redirecting the user to the sharing page, we can use a method called GetPictureFromToken() exposed by the MediaLibrary class. It accepts the unique picture ID as a parameter and returns a reference to the Picture object that represents the image selected by the user.
The picture ID is the parameter called FileId that we received in the URI when the application was opened. In the following sample, you can see how we retrieve the parameter by using the OnNavigatedTo event which is triggered when the user is redirected to the sharing page, and use it to display the selected picture with an Image control.
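A sketch of that page code (SharedImage is an assumed Image control name):

```csharp
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);

    if (NavigationContext.QueryString.ContainsKey("FileId"))
    {
        string fileId = NavigationContext.QueryString["FileId"];

        // Resolve the token to the Picture the user selected.
        MediaLibrary library = new MediaLibrary();
        Picture picture = library.GetPictureFromToken(fileId);

        // Display the full-resolution image in an Image control.
        BitmapImage image = new BitmapImage();
        image.SetSource(picture.GetPicture());
        SharedImage.Source = image;
    }
}
```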
There are other ways to integrate our application with the Photos Hub. They all work the same way:
A declaration must be added to the manifest file.
The application is opened using a special Uri that you need to intercept with a UriMapper class.
The user is redirected to a dedicated page in which you can retrieve the selected image by using the FileId parameter.
List the Application as a Photographic App
This is the simplest integration since it just displays the application in the Apps section of the Photos Hub. To support it, you simply have to add the following declaration in the manifest file:
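The declaration itself is missing here; based on the documented Photos Hub extensibility convention, it should look like the following (verify the ConsumerID GUID against the official documentation):

```xml
<Extensions>
  <Extension ExtensionName="Photos_Extra_Hub"
             ConsumerID="{5B04B775-356B-4AA0-AAF8-6491FFEA5632}"
             TaskID="_default" />
</Extensions>
```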
Nothing else is required since this kind of integration will simply include a quick link in the Photos Hub. The application will be opened normally, as if it was opened using the main app icon.
Integrating With the Edit Option
Another option available in the Application Bar of the photo details page is called edit. When the user taps it, Windows Phone displays a list of applications that support photo editing. After choosing one, the user expects to be redirected to an application page where the selected picture can be edited.
The following declaration should be added in the manifest file:
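The declaration itself is missing here; based on the documented photo-edit extensibility convention, it should look like the following (verify the ConsumerID GUID against the official documentation):

```xml
<Extensions>
  <Extension ExtensionName="Photos_Extra_Image_Editor"
             ConsumerID="{5B04B775-356B-4AA0-AAF8-6491FFEA5632}"
             TaskID="_default" />
</Extensions>
```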
This is the Uri to intercept to redirect users to the proper page where you’ll be able to retrieve the selected image by using the FileId parameter, as we did for the photo sharing feature.
Rich Media Apps
Rich media apps are applications that are able to take pictures and save them in the user’s library. When users open one of these photos, they will see:
text under the photo with the message “captured by” followed by the app’s name
a new option in the Application Bar called “open in” followed by the app’s name
This approach is similar to the sharing and editing features. The difference is that the rich media apps integration is available only for pictures taken within the application, while editing and sharing features are available for every photo, regardless of how they were taken.
The following declaration should be added in the manifest to enable rich media app integration:
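The declaration itself is missing here; based on the documented rich media extensibility convention, it should look like the following (verify the ConsumerID GUID against the official documentation):

```xml
<Extensions>
  <Extension ExtensionName="Photos_Rich_Media_Edit"
             ConsumerID="{5B04B775-356B-4AA0-AAF8-6491FFEA5632}"
             TaskID="_default" />
</Extensions>
```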
As you can see, the URI is always the same; what changes is the value of the Action parameter—in this case, RichMediaEdit.
This is the URI you need to intercept with your UriMapper implementation. You’ll need to redirect users to a page of your application that is able to manage the selected picture.
Conclusion
In this tutorial, we’ve learned many ways to create a great multimedia application for Windows Phone by:
integrating camera features to take photos and record videos
interacting with the media library to get access to pictures and audio
integrating with the native camera experience to give users access to advanced features directly in the Photos Hub
This tutorial represents a chapter from Windows Phone 8 Succinctly, a free eBook from the team at Syncfusion.
The new layout system introduced by Apple in WatchKit last November is a completely new concept for iOS and OS X developers. It isn't based on Auto Layout and it's much simpler.
In this tutorial, I'll show you the main features—and limitations—of this new layout system. We won't be writing any code, because the focus is on understanding the mechanism of the new layout system. In the end, you should be able to start building application interfaces using the WatchKit layout system.
1. What's So Cool About WatchKit?
WatchKit doesn't use the same layout system as normal iOS applications. It is simpler and smarter, and you must use storyboards to design your interfaces.
You don't have access to the positions of your elements at runtime and you are required to design static interfaces that are included in your app bundle. You can even forget about x and y coordinates, bounds, and frames, because everything is laid out in the storyboard. Let's create an example app to help you better understand these new concepts.
2. Create Your First WatchKit App
Step 1: Create Project
Open Xcode 6.2+ and create a new project. Choose the Single View Application template to start with. Name it WatchKitLayoutDemo, click Next, and save it somewhere on your computer.
Step 2: Add WatchKit Target
It's time to add the WatchKit target to the project. Go to menu File > New > Target... and select Apple Watch on the left. Choose WatchKit App and click Next.
In the following screen, you can configure your WatchKit target. Uncheck Include Notification Scene and Include Glance Scene, because I will only focus on a simple WatchKit app in this tutorial. Click Finish to add the WatchKit app to the project.
Step 3: Explore the WatchKit Targets
You may notice that Xcode added two targets to your project. To make things easier, Xcode has created a group for each target containing its source files and assets.
Expand these groups in the Project Navigator on the left. The blue group (see below) contains the source files and assets of the WatchKit extension, which will run on the iPhone. The Apple Watch doesn't run your application. The paired iPhone does the heavy lifting for the Apple Watch. The Apple Watch only renders the user interface and handles any user interaction. This concept is explained in more detail in this Tuts+ article.
The red group contains the assets of the WatchKit application, such as the storyboard file, which are stored and used on the Apple Watch itself. They live on the watch because sending these resources over every time the user opens the app would be expensive and would drain the battery much faster.
This also means that the app's user interface is static and can't be changed at runtime. Adding or removing elements, for example, isn't possible. You can show and hide user interface elements though. If you, for example, set the hidden property of a group to YES at runtime—or true if you love Swift—, the group will be hidden and the other user interface elements will be automatically repositioned.
In this tutorial, I'll show you the powerful layout used by WatchKit. You won't need to write any code. Let's focus on the WatchKitLayoutDemo WatchKit App group, which contains the storyboard file.
3. Storyboard
Select the Interface.storyboard file to open it. If you are coming from the iOS or OS X world, you should be familiar with storyboards. As I previously mentioned, storyboards are the only way to design WatchKit apps. Auto Layout is absent and manipulating frames is not possible using the WatchKit framework.
UIKit's UIViewController class is absent in WatchKit. Instead, WatchKit declares the WKInterfaceController class. You can see that Xcode already added an interface controller for us.
The WatchKit framework defines a range of user interface elements that you can use to create your app's user interface. This is a complete list of the elements you can use:
Group
Table
Image
Separator
Button
Switch
Slider
Label
Date
Timer
Map
Menu
Menu Item
Most of these don't need explaining, but there are quite a few new elements, such as group, separator, date, timer, and menu. One of the most important elements is the group.
If you ever used HTML and CSS to create a website, you may be familiar with the <div> tag. You can think of a group as a container for other interface elements. A group has many properties that you can customize directly in Interface Builder.
Step 1: Define the Layout of Your App
It's important to plan the layout in detail before starting development. This will save you hours and hours of headaches if, at some point, you realize that the really cool feature you wanted to build isn't possible or doesn't look good on a physical device. Make sure you have read the Apple Watch Human Interface Guidelines.
For this example, I'm going to show you how to create a layout for a hotel app that finds hotels near the user's current location. I'll design the screen that shows the details of a particular hotel. As I mentioned in the introduction, I won't write any code. Instead, I'll focus on understanding the mechanics of the new layout system.
Drawing skills aside, this is what I have in mind for my layout. The hotel name will be at the top of the screen, with some star icons below it showing the hotel's rating. I then want to add an image of the hotel along with its address and two buttons.
Step 2: Adding a Group
Our interface controller is empty at the moment and there is no base group. To add new elements, drag and drop them from the Object Library on the right into the Interface Controller. The Scene Navigator on the left is useful to check if the elements are correctly positioned. The first thing to do is to add a group, which will allow us to scroll vertically if the content doesn't fit the screen. Drag a group from the Object Library and drop it into the Interface Controller as shown below.
Now that you have a group in your interface controller, you can see its attributes in the Attributes Inspector on the right. Let's look at some of them in more detail.
Layout: The layout determines if the group's elements are laid out horizontally or vertically. When you add an element, it'll be positioned next to or below the previous one.
Insets: This attribute determines the top, bottom, left, and right inset for the group.
Spacing: As its name implies, it determines the spacing between the elements within the group.
Background: You can set an image as the background of the group and animate it by naming the images sequentially.
Position: The position attribute determines the horizontal (left, center, right) and vertical (top, center, bottom) position of the group.
Size: The size attribute determines the width and height of the element. There are three values, Size to Fit Content (automatically adjusted based on the content), Relative to Container (takes the container size and multiplies it by the value defined), Fixed (constant value).
Keep in mind that Apple Watch comes in two sizes. You should use the same layout in both cases, but you may run into some small differences. By clicking the plus icon on the left of an attribute, you can set an attribute that will only be applied when the app runs on the specified device.
Let's continue building our layout. Change the Group Layout to Vertical so that the content will scroll vertically when I add more elements. Set the Horizontal position to Center so that the content will be centered. Finally, set the Width attribute to Width Relative to Container with the multiplier set to 1. This will expand the group to fill the entire screen width.
Step 3: Adding a Label
Now that we have set up the main properties for our container group, let's add a label to the group. From the Object Library, add a label to the group you added a moment ago. If you select the label, you'll see how its width doesn't take up all the available space. Let's fix that by changing its width attribute to Relative to Container. To center the label, change the Horizontal attribute to Center and set Text Alignment to Center.
What happens if the hotel's name is too long? I want it to expand and grow vertically. To do that, change the Lines attribute of the label to 0. This means that the name of the hotel will span multiple lines if necessary. Change the label's text to see the result for yourself. The result should look like the screenshot below.
Step 4: Adding Stars
We also want to show the hotel's rating. The idea is to have a group just below the hotel's name with the number of stars of the hotel. Add another group to the group we already have. In other words, the new group is nested within the first group.
I want the five stars to be on the same line and centered. As I previously mentioned, I can't add or remove objects at runtime, but I can hide and show objects. I will add five images to the group. If the hotel has fewer stars, I will hide them at runtime.
Drag five images into the nested group and set the width of each star to Relative to Container. Change the multiplier from 1 to 0.2. The reason for choosing 0.2 as the multiplier is simple. If I want five images to fit in the available space on the same line, I want each image to be 20% of the group's width. Change the Horizontal position to Center so that they'll always be centered, no matter how many stars there are.
Next, let's assign a cool image to each image. You can find the images I use in the source files of this tutorial. Set the Image attribute to star.png and change the mode to Aspect Fit to ensure the aspect ratio is respected.
The result should look similar to the animated image below. You can even try to check the Hidden property of one of the images in the Attributes Inspector and see how the stars are always centered.
Step 5: Adding the Hotel Image
Start by downloading the example image of a hotel from freeimages. I want to add an image of the hotel to show the user what the hotel looks like. Add a new image from the Object Library as you did earlier for the stars. Change the Image attribute to the image you downloaded and set the Mode to Aspect Fit.
Change the Horizontal position to Center and the Width to Relative to Container. Always make sure to add the image as a nested element of the main group by checking the layer hierarchy in the Scene Navigator on the left. Set Height to Size to Fit Content to automatically resize the image based on its dimensions.
Step 6: Adding the Address
Below the image, I'd like to add an address label. We could also add a map, but let's use a label for this example. Drag a label from the Object Library and position it below the hotel image. Set Lines to 0 and Width to Relative to Container. Change the text to be a random address of your choice.
As you may have noticed, the interface controller is now taller. It automatically resizes in the storyboard so you can see its content.
Step 7: Adding Buttons
The interface controller should have two buttons at the bottom. I want the buttons to be half the width of the screen and positioned side by side. Because our main group has a vertical layout, we need to add a nested group so the buttons are positioned horizontally instead of vertically.
Add a new group as shown below and add two buttons to it. Set their Width attribute to Relative to Container and set the multiplier to 0.5. Set the Vertical position of the two buttons to Center to center them vertically.
Set the text of the first button to "From $99" and the background color to a nice looking red. Set the text of the second button to "View More" and the background color to blue. The interface controller should now look like this:
Make sure you have selected the correct scheme and press Command-R to run the WatchKit application.
When the iOS Simulator opens, there is one more thing you need to do. Select the iOS Simulator and choose Hardware > External Displays > Apple Watch 42 mm. The Apple Watch Simulator will appear next to your iPhone Simulator. You can now see your working layout in action. See the result in the video below.
Conclusion
In this tutorial, I showed you the main features and concepts to build complex layouts in WatchKit. We explored adding and positioning user interface elements, and a few best practices. You are now able to turn your Apple Watch app ideas into reality. I hope you enjoyed this tutorial.
The new layout system introduced by Apple in WatchKit last November is a completely new concept for iOS and OS X developers. It isn't based on Auto Layout and it's much simpler.
In this tutorial, I'll show you the main features—and limitations—of this new layout system. We won't be writing any code, because the focus is on understanding the mechanism of the new layout system. In the end, you should be able to start building application interfaces using the WatchKit layout system.
1. What's So Cool About WatchKit?
WatchKit doesn't use the same layout system as regular iOS applications. It is much simpler, and storyboards are the only way to design your interfaces.
You don't have access to the positions of your elements at runtime and you are required to design static interfaces that are included in your app bundle. You can even forget about x and y coordinates, bounds, and frames, because everything is laid out in the storyboard. Let's create an example app to help you better understand these new concepts.
2. Create Your First WatchKit App
Step 1: Create Project
Open Xcode 6.2+ and create a new project. Choose the Single View Application template to start with. Name it WatchKitLayoutDemo, click Next, and save it somewhere on your computer.
Step 2: Add WatchKit Target
It's time to add the WatchKit target to the project. Go to menu File > New > Target... and select Apple Watch on the left. Choose WatchKit App and click Next.
In the following screen, you can configure your WatchKit target. Uncheck Include Notification Scene and Include Glance Scene, because I will only focus on a simple WatchKit app in this tutorial. Click Finish to add the WatchKit app to the project.
Step 3: Explore the WatchKit Targets
You may notice that Xcode added two targets to your project. To make it easier for us, Xcode has created a group for each target, containing the source files and assets for each target.
Expand these groups in the Project Navigator on the left. The blue group (see below) contains the source files and assets of the WatchKit extension, which will run on the iPhone. The Apple Watch doesn't run your application. The paired iPhone does the heavy lifting for the Apple Watch. The Apple Watch only renders the user interface and handles any user interaction. This concept is explained in more detail in this Tuts+ article.
The red group contains the assets of the WatchKit application, such as the storyboard file that will be stored and used on the Apple Watch. This is done because the resources would be expensive to send every time the user opens an app and would drain the battery much faster.
This also means that the app's user interface is static and can't be changed at runtime. Adding or removing elements, for example, isn't possible. You can show and hide user interface elements though. If you, for example, set the hidden property of a group to YES at runtime—or true if you love Swift—, the group will be hidden and the other user interface elements will be automatically repositioned.
As mentioned in the introduction, you won't need to write any code in this tutorial. Let's focus on the WatchKitLayoutDemo WatchKit App group, which contains the storyboard file.
3. Storyboard
Select the Interface.storyboard file to open it. If you are coming from the iOS or OS X world, you should be familiar with storyboards. As I previously mentioned, storyboards are the only way to design WatchKit apps. Auto Layout is absent and manipulating frames is not possible using the WatchKit framework.
UIKit's UIViewController class is absent in WatchKit. Instead, WatchKit declares the WKInterfaceController class. You can see that Xcode already added an interface controller for us.
The WatchKit framework defines a range of user interface elements that you can use to create your app's user interface. This is a complete list of the elements you can use:
Group
Table
Image
Separator
Button
Switch
Slider
Label
Date
Timer
Map
Menu
Menu Item
Most of these don't need explaining, but there are quite a few new elements, such as group, separator, date, timer, and menu. One of the most important elements is the group.
If you ever used HTML and CSS to create a website, you may be familiar with the <div> tag. You can think of a group as a container for other interface elements. A group has many properties that you can customize directly in Interface Builder.
Step 1: Define the Layout of Your App
It's important to plan the layout in detail before starting development. This will save you hours and hours of headaches if, at some point, you realize that the really cool feature you wanted to build isn't possible or doesn't look good on a physical device. Make sure you have read the Apple Watch Human Interface Guidelines.
For this example, I'm going to show you how to create a layout for a hotel app in which you can find hotels near your current location. I'll design the screen that shows the details of a particular hotel. As I mentioned in the introduction, I won't write any code. Instead, I will focus on understanding the mechanics of the new layout system.
Drawing skills aside, this is what I have in mind for my layout. The hotel name will be at the top of the screen and below it will be some star icons showing the hotel's rating. I then want to add an image along with the hotel's address and two buttons.
Step 2: Adding a Group
Our interface controller is empty at the moment and there is no base group. To add new elements, drag and drop them from the Object Library on the right into the Interface Controller. The Scene Navigator on the left is useful to check if the elements are correctly positioned. The first thing to do is to add a group, which will allow us to scroll vertically if the content doesn't fit the screen. Drag a group from the Object Library and drop it into the Interface Controller as shown below.
Now that you have a group in your interface controller, you can see its attributes in the Attributes Inspector on the right. Let's look at some of them in more detail.
Layout: The layout determines if the group's elements are laid out horizontally or vertically. When you add an element, it'll be positioned next to or below the previous one.
Insets: This attribute determines the top, bottom, left, and right inset for the group.
Spacing: As its name implies, it determines the spacing between the elements within the group.
Background: You can set an image as the background of the group and animate it by naming the images sequentially.
Position: The position attribute determines the horizontal (left, center, right) and vertical (top, center, bottom) position of the group.
Size: The size attribute determines the width and height of the element. There are three values, Size to Fit Content (automatically adjusted based on the content), Relative to Container (takes the container size and multiplies it by the value defined), Fixed (constant value).
Keep in mind that Apple Watch comes in two sizes. You should use the same layout in both cases, but you may run into some small differences. By clicking the plus icon on the left of an attribute, you can set an attribute that will only be applied when the app runs on the specified device.
Let's continue building our layout. Change the Group Layout to Vertical so that the content will scroll vertically when I add more elements. Set the Horizontal position to Center so that the content will be centered. Finally, set the Width attribute to Width Relative to Container with the multiplier set to 1. This will expand the group to fill the entire screen width.
Step 3: Adding a Label
Now that we have set up the main properties for our container group, let's add a label to the group. From the Object Library, add a label to the group you added a moment ago. If you select the label, you'll see how its width doesn't take up all the available space. Let's fix that by changing its width attribute to Relative to Container. To center the label, change the Horizontal attribute to Center and set Text Alignment to Center.
What happens if the hotel's name is too long? I want it to expand and grow vertically. To do that, change the Lines attribute of the label to 0. This means that the name of the hotel will span multiple lines if necessary. Change the label's text to see the result for yourself. The result should look like the screenshot below.
Step 4: Adding Stars
We also want to show the hotel's rating. The idea is to have a group just below the hotel's name with the number of stars of the hotel. Add another group to the group we already have. In other words, the new group is nested within the first group.
I want the five stars to be on the same line and centered. As I previously mentioned, I can't add or remove objects at runtime, but I can hide and show objects. I will add five images to the group. If the hotel has fewer stars, I will hide them at runtime.
Drag five images into the nested group and set the width of each star to Relative to Container. Change the multiplier from 1 to 0.2. The reason for choosing 0.2 as the multiplier is simple. If I want five images to fit in the available space on the same line, I want each image to be 20% of the group's width. Change the Horizontal position to Center so that they'll always be centered, no matter how many stars there are.
Next, let's assign an image to each image element. You can find the images I use in the source files of this tutorial. Set the Image attribute to star.png and change the Mode to Aspect Fit to ensure the aspect ratio is respected.
The result should look similar to the animated image below. You can even try to check the Hidden property of one of the images in the Attributes Inspector and see how the stars are always centered.
Step 5: Adding the Hotel Image
Start by downloading the example image of a hotel from freeimages. I want to add an image of the hotel to show the user what the hotel looks like. Add a new image from the Object Library as you did earlier for the stars. Change the Image attribute to the image you downloaded and set the Mode to Aspect Fit.
Change the Horizontal position to Center and the Width to Relative to Container. Always make sure to add the image as a nested element of the main group by checking the layer hierarchy in the Scene Navigator on the left. Set Height to Size to Fit Content to automatically resize the element based on the image's dimensions.
Step 6: Adding the Address
Below the image, I'd like to add an address label. We could also add a map, but let's use a label for this example. Drag a label from the Object Library and position it below the hotel image. Set Lines to 0 and Width to Relative to Container. Change the text to be a random address of your choice.
As you may have noticed, the interface controller is now taller. It automatically resizes in the storyboard so you can see its content.
Step 7: Adding Buttons
The interface controller should have two buttons at the bottom. I want the buttons to be half the width of the screen and positioned side by side. Because our main group has a vertical layout, we need to add a nested group so the buttons are positioned horizontally instead of vertically.
Add a new group as shown below and add two buttons to it. Set their Width attribute to Relative to Container and set the multiplier to 0.5. Set the Vertical position of the two buttons to Center to center them vertically.
Set the text of the first button to "From $99" and the background color to a nice looking red. Set the text of the second button to "View More" and the background color to blue. The interface controller should now look like this:
Make sure you have selected the correct scheme and press Command-R to run the WatchKit application.
When the iOS Simulator opens, there is one more thing you need to do. Select the iOS Simulator and choose Hardware > External Displays > Apple Watch 42 mm. The Apple Watch Simulator will appear next to your iPhone Simulator. You can now see your working layout in action. See the result in the video below.
Conclusion
In this tutorial, I showed you the main features and concepts to build complex layouts in WatchKit. We explored adding and positioning user interface elements, and a few best practices. You are now able to turn your Apple Watch app ideas into reality. I hope you enjoyed this tutorial.
One of the most interesting aspects of the Material Design specifications is the visual continuity between activities. With just a few lines of code, the new Lollipop APIs allow you to meaningfully transition between two activities, thanks to seamless and continuous animations. This breaks the classic activity boundaries of the previous Android versions and allows the user to understand how elements go from one point to another.
In this tutorial, I will show you how to achieve this result, making a sample application consistent with Google's Material Design guidelines.
Prerequisites
In this tutorial, I'll assume that you are already familiar with Android development and that you use Android Studio as your IDE. I'll use Android intents extensively, assuming a basic knowledge of the activity lifecycle, and the new RecyclerView widget introduced with API 21, last June. I'm not going to dive into the details of this class, but, if you're interested, you can find a great explanation in this Tuts+ tutorial.
1. Create the First Activity
The basic structure of the application is straightforward. There are two activities, a main one, MainActivity.java, whose task it is to display a list of items, and a second one, DetailActivity.java, which will show the details of the item selected in the previous list.
Step 1: The RecyclerView Widget
To show the list of items, the main activity will use the RecyclerView widget introduced in Android Lollipop. The first thing you need to do is add the following line to the dependencies section of your project's build.gradle file to enable backward compatibility:
compile 'com.android.support:recyclerview-v7:+'
Step 2: Data Definition
For the sake of brevity, we will not define an actual database or a similar data source for the application. Instead, we will use a custom class, Contact. Each item will have a name, a color, and basic contact information associated with it. This is what the implementation of the Contact class looks like:
public class Contact {
// The fields associated to the person
private final String mName, mPhone, mEmail, mCity, mColor;
Contact(String name, String color, String phone, String email, String city) {
mName = name; mColor = color; mPhone = phone; mEmail = email; mCity = city;
}
// Returns the item associated with a particular id,
// uniquely generated by the getId method defined below
public static Contact getItem(int id) {
for (Contact item : CONTACTS) {
if (item.getId() == id) {
return item;
}
}
return null;
}
// mName and mPhone combined are assumed to be unique,
// so we don't need to add another id field
public int getId() {
return mName.hashCode() + mPhone.hashCode();
}
public enum Field {
NAME, COLOR, PHONE, EMAIL, CITY
}
public String get(Field f) {
switch (f) {
case COLOR: return mColor;
case PHONE: return mPhone;
case EMAIL: return mEmail;
case CITY: return mCity;
case NAME: default: return mName;
}
}
}
You will end up with a nice container for the information you care about, but we still need to fill it with some data. At the top of the Contact class, add the following snippet to populate the data set.
By defining the data as public and static, every class in the project is able to read it. In a sense, we mimic the behavior of a database, with the exception that we are hardcoding the data into a class.
public static final Contact[] CONTACTS = new Contact[] {
new Contact("John", "#33b5e5", "+01 123456789", "john@example.com", "Venice"),
new Contact("Valter", "#ffbb33", "+01 987654321", "valter@example.com", "Bologna"),
new Contact("Eadwine", "#ff4444", "+01 123456789", "eadwin@example.com", "Verona"),
new Contact("Teddy", "#99cc00", "+01 987654321", "teddy@example.com", "Rome"),
new Contact("Ives", "#33b5e5", "+01 11235813", "ives@example.com", "Milan"),
new Contact("Alajos", "#ffbb33", "+01 123456789", "alajos@example.com", "Bologna"),
new Contact("Gianluca", "#ff4444", "+01 11235813", "me@gian.lu", "Padova"),
new Contact("Fane", "#99cc00", "+01 987654321", "fane@example.com", "Venice"),
};
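As an aside, the id scheme above can be exercised outside Android. Below is a standalone plain-Java sketch of the same logic: the id is the sum of the name and phone hash codes, and lookup is a linear scan over the hardcoded array. The class and array here are illustrative, not part of the app's source.

```java
// Plain-Java sketch of the Contact id scheme: id = name.hashCode() + phone.hashCode(),
// with a linear scan for lookup, mirroring Contact.getId() and Contact.getItem().
public class ContactIdDemo {

    // name/phone pairs, mirroring two entries of the CONTACTS array
    static final String[][] CONTACTS = {
        {"John", "+01 123456789"},
        {"Valter", "+01 987654321"},
    };

    public static int idOf(String name, String phone) {
        // same formula as Contact.getId()
        return name.hashCode() + phone.hashCode();
    }

    public static String[] getItem(int id) {
        // same linear scan as Contact.getItem()
        for (String[] c : CONTACTS) {
            if (idOf(c[0], c[1]) == id) {
                return c;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        int id = idOf("John", "+01 123456789");
        System.out.println(getItem(id)[0]); // prints John
    }
}
```

Note that nothing guarantees two different name/phone pairs can't produce the same hash sum; for a production app a dedicated id field avoids that risk entirely.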
Step 3: Defining the Main Layouts
The layout of the main activity is simple, because the list will fill the entire screen. The layout includes a RelativeLayout as the root (a LinearLayout would work just as well) and a RecyclerView as its only child.
Because the RecyclerView widget only arranges its subelements, you also need to design the layout of a single list item. We want to have a colored circle to the left of each item of the contact list, so you first have to define the drawable circle.xml.
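A minimal circle.xml sketch, assuming a simple oval shape drawable (the solid color is only a placeholder, since the adapter later overrides it per contact through GradientDrawable.setColor()):

```xml
<!-- res/drawable/circle.xml: an oval shape drawable; the solid color here
     is just a default that DataManager replaces for each contact -->
<shape xmlns:android="http://schemas.android.com/apk/res/android"
    android:shape="oval">
    <solid android:color="#33b5e5" />
</shape>
```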
We have almost arrived at the end of the first part of the tutorial. You still have to write the RecyclerView.ViewHolder and the RecyclerView.Adapter, and assign everything to the associated view in the onCreate method of the main activity. In this case, the RecyclerView.ViewHolder must also be able to handle clicks so you will need to add a specific class capable of doing so. Let's start defining the class responsible for click handling.
public class RecyclerClickListener implements RecyclerView.OnItemTouchListener {
private OnItemClickListener mListener;
GestureDetector mGestureDetector;
public interface OnItemClickListener {
public void onItemClick(View view, int position);
}
public RecyclerClickListener(Context context, OnItemClickListener listener) {
mListener = listener;
mGestureDetector = new GestureDetector(context, new GestureDetector.SimpleOnGestureListener() {
@Override public boolean onSingleTapUp(MotionEvent e) {
return true;
}
});
}
@Override public boolean onInterceptTouchEvent(RecyclerView view, MotionEvent e) {
View childView = view.findChildViewUnder(e.getX(), e.getY());
if (childView != null && mListener != null && mGestureDetector.onTouchEvent(e)) {
mListener.onItemClick(childView, view.getChildPosition(childView));
return true;
}
return false;
}
@Override public void onTouchEvent(RecyclerView view, MotionEvent motionEvent) { }
}
It is also necessary to specify the RecyclerView.Adapter, which I will call DataManager. It is responsible for loading the data and inserting it into the views of the list. This data manager class also contains the definition of the RecyclerView.ViewHolder.
public class DataManager extends RecyclerView.Adapter<DataManager.RecyclerViewHolder> {
public static class RecyclerViewHolder extends RecyclerView.ViewHolder {
TextView mName, mPhone;
View mCircle;
RecyclerViewHolder(View itemView) {
super(itemView);
mName = (TextView) itemView.findViewById(R.id.CONTACT_name);
mPhone = (TextView) itemView.findViewById(R.id.CONTACT_phone);
mCircle = itemView.findViewById(R.id.CONTACT_circle);
}
}
@Override
public RecyclerViewHolder onCreateViewHolder(ViewGroup viewGroup, int i) {
View v = LayoutInflater.from(viewGroup.getContext()).inflate(R.layout.contact_item, viewGroup, false);
return new RecyclerViewHolder(v);
}
@Override
public void onBindViewHolder(RecyclerViewHolder viewHolder, int i) {
// get the single element from the main array
final Contact contact = Contact.CONTACTS[i];
// Set the values
viewHolder.mName.setText(contact.get(Contact.Field.NAME));
viewHolder.mPhone.setText(contact.get(Contact.Field.PHONE));
// Set the color of the shape
GradientDrawable bgShape = (GradientDrawable) viewHolder.mCircle.getBackground();
bgShape.setColor(Color.parseColor(contact.get(Contact.Field.COLOR)));
}
@Override
public int getItemCount() {
return Contact.CONTACTS.length;
}
}
Finally, add the following code to the onCreate method, below setContentView. The main activity is now ready.
RecyclerView rv = (RecyclerView) findViewById(R.id.rv); // layout reference
LinearLayoutManager llm = new LinearLayoutManager(this);
rv.setLayoutManager(llm);
rv.setHasFixedSize(true); // to improve performance
rv.setAdapter(new DataManager()); // the data manager is assigned to the RecyclerView
rv.addOnItemTouchListener( // and the click is handled
new RecyclerClickListener(this, new RecyclerClickListener.OnItemClickListener() {
@Override public void onItemClick(View view, int position) {
// STUB:
// The click on the item must be handled
}
}));
This is what the application looks like if you build and run it.
2. Create the Details Activity
Step 1: The Layout
The second activity is much simpler. It takes the ID of the selected contact and retrieves the additional information that the first activity doesn't show.
From a design point of view, the layout of this activity is critical, since it's the most important part of the application. As far as the XML is concerned, however, it's trivial: a series of TextView instances positioned in a pleasant way, using RelativeLayout and LinearLayout. This is what the layout looks like:
Since the two activities are linked by an intent, you need to send some piece of information that allows the second activity to understand which contact's details you requested.
One option would be to use the position variable as a reference. The position of the element in the list corresponds to the position of the element in the array, so there should be nothing wrong with using this integer as a unique reference.
This would work, but if you take this approach and, for whatever reason, the data set is modified at runtime, the reference won't match the contact you're interested in. This is why it is better to use an ad hoc ID, which is what the getId method defined in the Contact class provides.
Edit the onItemClick handler of the list of items as shown below.
@Override public void onItemClick(View view, int position) {
Intent intent = new Intent(MainActivity.this, DetailsActivity.class);
intent.putExtra(DetailsActivity.ID, Contact.CONTACTS[position].getId());
startActivity(intent);
}
The DetailsActivity will receive the information from the Intent extras and construct the correct object using the ID as a reference. This is shown in the following code block.
// Before the onCreate
public final static String ID = "ID";
public Contact mContact;
// In the onCreate, after the setContentView method
mContact = Contact.getItem(getIntent().getIntExtra(ID, 0));
Just as before in the onCreateViewHolder method of the RecyclerView, the views are initialized using the findViewById method and populated using setText. For example, to configure the name field we do the following:
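A sketch of that step (the id DETAILS_name is an assumption, mirroring the CONTACT_name naming used in the list item layout):

```java
// In DetailsActivity.onCreate(), after mContact has been resolved.
// R.id.DETAILS_name is an assumed id for the name TextView in the layout.
TextView nameView = (TextView) findViewById(R.id.DETAILS_name);
nameView.setText(mContact.get(Contact.Field.NAME));
```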
The process is the same for the other fields. The second activity is finally ready.
3. Meaningful Transitions
We have finally arrived at the core of the tutorial: animating the two activities using Lollipop's new shared element transitions.
Step 1: Configure Your Project
The first thing you will need to do is edit your theme in the styles.xml file in the values-v21 folder. This enables content transitions and sets the enter and exit transitions for the views that are not shared between the two activities.
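A sketch of what that theme edit could look like. The theme name and the choice of fade transitions are assumptions; windowContentTransitions and the enter/exit transition attributes are the framework items that matter here:

```xml
<!-- res/values-v21/styles.xml: a sketch, theme name is illustrative -->
<style name="AppTheme" parent="android:Theme.Material.Light">
    <!-- enable activity content transitions -->
    <item name="android:windowContentTransitions">true</item>
    <!-- views NOT shared between the two activities animate with these -->
    <item name="android:windowEnterTransition">@android:transition/fade</item>
    <item name="android:windowExitTransition">@android:transition/fade</item>
</style>
```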
Please note that your project must target, and thus be compiled with, at least Android API 21. The animations will be ignored on systems that don't have Lollipop installed. Unfortunately, for performance reasons, the AppCompat library does not provide complete backward compatibility for these animations.
Step 2: Assign the Transition Name in the Layout Files
Once you've edited your style.xml file, you have
to point out the relationshipbetween the two common elements of the
views.
In our example, the shared views are the field containing the name of the contact, the one containing the phone number, and the colored circle. For each of them, you have to specify a common transition name. Start by adding the following items to the strings.xml resource file:
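Since the code later references R.string.transition_name_circle, R.string.transition_name_name, and R.string.transition_name_phone, the resources could look like this. The string values are arbitrary; they just have to be the same in both layouts:

```xml
<!-- res/values/strings.xml: transition names shared by the two layouts -->
<resources>
    <string name="transition_name_circle">transition_circle</string>
    <string name="transition_name_name">transition_name</string>
    <string name="transition_name_phone">transition_phone</string>
</resources>
```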
Then, for each of the three pairs, add the android:transitionName attribute with the corresponding value in the layout files. For the colored circle, the code looks like this:
<!-- In the single item layout: the view we are transitioning *from* -->
<View
    android:id="@+id/CONTACT_circle"
    android:transitionName="@string/transition_name_circle"
    android:layout_width="40dp"
    android:layout_height="40dp"
    android:background="@drawable/circle"
    android:layout_centerVertical="true"
    android:layout_alignParentLeft="true"/>
<!-- In the details activity: the view we are transitioning *to* -->
<View
    android:id="@+id/DETAILS_circle"
    android:transitionName="@string/transition_name_circle"
    android:layout_width="48dp"
    android:layout_height="48dp"
    android:background="@drawable/circle"
    android:layout_centerVertical="true"
    android:layout_alignParentLeft="true"/>
Thanks to this attribute, Android will know which views are shared between the two activities and will correctly animate the transition. Repeat the same process for the other two views.
Step 3: Configure the Intent
From a coding point of view, you will need to attach a specific ActivityOptions bundle to the intent. The method you need is makeSceneTransitionAnimation, which takes as parameters the context of the activity and as many shared elements as we need. In the onItemClick method of the RecyclerView, edit the previously defined Intent like this:
@Override public void onItemClick(View view, int position) {
Intent intent = new Intent(MainActivity.this, DetailsActivity.class);
intent.putExtra(DetailsActivity.ID, Contact.CONTACTS[position].getId());
ActivityOptionsCompat options = ActivityOptionsCompat.makeSceneTransitionAnimation(
// the context of the activity
MainActivity.this,
// For each shared element, add to this method a new Pair item,
// which contains the reference of the view we are transitioning *from*,
// and the value of the transitionName attribute
new Pair<View, String>(view.findViewById(R.id.CONTACT_circle),
getString(R.string.transition_name_circle)),
new Pair<View, String>(view.findViewById(R.id.CONTACT_name),
getString(R.string.transition_name_name)),
new Pair<View, String>(view.findViewById(R.id.CONTACT_phone),
getString(R.string.transition_name_phone))
);
ActivityCompat.startActivity(MainActivity.this, intent, options.toBundle());
}
For each shared element to be animated, you will have to add to the makeSceneTransitionAnimation method a new Pair item. Each Pair has two values, the first is a reference to the view you are transitioning from, the second is the value of the transitionName attribute.
Be careful when importing the Pair class: you need the one from the android.support.v4.util package, not from android.util. Also, remember to use the ActivityCompat.startActivity method instead of the startActivity method, because otherwise your application will not run on devices with an API level below 16.
That's it. You're done. It's as simple as that.
Conclusion
In this tutorial you learned how to beautifully and seamlessly transition between two activities that share one or more common elements, allowing for a visually pleasant and meaningful continuity.
You started by creating the first of the two activities, whose role is to display the list of contacts. You then completed the second activity, designing its layout and implementing a way to pass a unique reference between the two activities. Finally, you looked at how makeSceneTransitionAnimation works, thanks to the XML transitionName attribute.
Bonus Tip: Stylistic Details
To create a true Material Design looking application, as shown in the previous screenshots, you will also need to change the colors of your theme. Edit your base theme in the values-v21 folder to achieve a nice result.
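A sketch of those color edits, using the API 21 theme attributes (the hex values here are placeholders, not the colors from the screenshots):

```xml
<!-- res/values-v21/styles.xml: placeholder palette for the Material theme -->
<style name="AppTheme" parent="android:Theme.Material.Light">
    <item name="android:colorPrimary">#2196F3</item>
    <item name="android:colorPrimaryDark">#1976D2</item>
    <item name="android:colorAccent">#FF4081</item>
</style>
```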
One of the
most interesting aspects of the Material
Design specifications is the visual
continuity between activities. With just a few lines of code, the new Lollipop APIs allow you to meaningfully transition
between two activities, thanks to seamless and continuous animations. This breaks the classic activity boundaries of the previous Android versions and
allows the user to understand how elements go from one point to another.
In this tutorial, I will show you how to achieve this result, making a sample application
consistent with Google’s Material Design guidelines.
Prerequisites
In this tutorial, I'll assume that you are already familiar with Android development and that you use Android Studio as your IDE. I'll use Android intents extensively, assuming a basic knowledge of the activity lifecycle, and the new RecyclerView widget introduced with API 21, last June. I'm not going to dive into the details of this class, but, if you're interested, you can find a great explanation in this Tuts+ tutorial.
1. Create the First Activity
The basic structure
of the application is straightforward. There are two activities, a main
one, MainActivity.java, whose task it is to display a list of items, and a second one, DetailActivity.java, which will show the details of the item
selected in the previous list.
Step 1: The RecyclerView Widget
To show the list of items, the main activity will use the RecyclerViewwidget introduced in Android Lollipop. The first thing you need to do is, add the following line to the dependencies section in your project’s
build.grade file to enable backward
compatibility:
compile 'com.android.support:recyclerview-v7:+'
Step 2: Data Definition
For the
sake of brevity, we will not define an actual database or a similar source of
data for the application. Instead, we will use a custom class, Contact. Each item
will have a name, a color, and basic contact information associated to it. This is what the implementation of the Contact class looks like:
public class Contact {
// The fields associated to the person
private final String mName, mPhone, mEmail, mCity, mColor;
Contact(String name, String color, String phone, String email, String city) {
mName = name; mColor = color; mPhone = phone; mEmail = email; mCity = city;
}
// This method allows to get the item associated to a particular id,
// uniquely generated by the method getId defined below
public static Contact getItem(int id) {
for (Contact item : CONTACTS) {
if (item.getId() == id) {
return item;
}
}
return null;
}
// since mName and mPhone combined are surely unique,
// we don't need to add another id field
public int getId() {
return mName.hashCode() + mPhone.hashCode();
}
public static enum Field {
NAME, COLOR, PHONE, EMAIL, CITY
}
public String get(Field f) {
switch (f) {
case COLOR: return mColor;
case PHONE: return mPhone;
case EMAIL: return mEmail;
case CITY: return mCity;
case NAME: default: return mName;
}
}
}
You will end up with a nice container for the information you care about. But we need to fill it with some data. At the top of
the Contactclass, add the following piece of code to populate the data set.
By defining the data as public and static, every class in the project is able to read it. In a sense, we mimic the behavior of a database with the exception that we are hardcoding it into a class.
public static final Contact[] CONTACTS = new Contact[] {
new Contact("John", "#33b5e5", "+01 123456789", "john@example.com", "Venice"),
new Contact("Valter", "#ffbb33", "+01 987654321", "valter@example.com", "Bologna"),
new Contact("Eadwine", "#ff4444", "+01 123456789", "eadwin@example.com", "Verona"),
new Contact("Teddy", "#99cc00", "+01 987654321", "teddy@example.com", "Rome"),
new Contact("Ives", "#33b5e5", "+01 11235813", "ives@example.com", "Milan"),
new Contact("Alajos", "#ffbb33", "+01 123456789", "alajos@example.com", "Bologna"),
new Contact("Gianluca", "#ff4444", "+01 11235813", "me@gian.lu", "Padova"),
new Contact("Fane", "#99cc00", "+01 987654321", "fane@example.com", "Venice"),
};
Step 3: Defining the Main Layouts
The layout of the main activity is simple, because the list will fill the entire screen. The layout includes a RelativeLayout as the root (it could just as well be a LinearLayout) and a RecyclerView as its only child.
Because the RecyclerView widget arranges subelements and nothing more, you also need to design the layout of a single list item. We want a colored circle to the left of each item of the contact list, so you first have to define the drawable circle.xml.
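The drawable itself isn't shown above. A minimal sketch of circle.xml as an oval shape drawable; the solid color is just a placeholder, since the adapter overwrites it at runtime via GradientDrawable.setColor:

```xml
<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android"
    android:shape="oval">
    <!-- placeholder color; replaced at runtime by the adapter -->
    <solid android:color="#33b5e5"/>
</shape>
```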
We have almost arrived at the end of the first part of the tutorial. You still have to write the RecyclerView.ViewHolder and the RecyclerView.Adapter, and assign everything to the associated view in the onCreate method of the main activity. In this case, the RecyclerView.ViewHolder must also be able to handle clicks, so you will need to add a specific class capable of doing so. Let's start by defining the class responsible for click handling.
public class RecyclerClickListener implements RecyclerView.OnItemTouchListener {
private OnItemClickListener mListener;
GestureDetector mGestureDetector;
public interface OnItemClickListener {
public void onItemClick(View view, int position);
}
public RecyclerClickListener(Context context, OnItemClickListener listener) {
mListener = listener;
mGestureDetector = new GestureDetector(context, new GestureDetector.SimpleOnGestureListener() {
@Override public boolean onSingleTapUp(MotionEvent e) {
return true;
}
});
}
@Override public boolean onInterceptTouchEvent(RecyclerView view, MotionEvent e) {
View childView = view.findChildViewUnder(e.getX(), e.getY());
if (childView != null && mListener != null && mGestureDetector.onTouchEvent(e)) {
mListener.onItemClick(childView, view.getChildPosition(childView));
return true;
}
return false;
}
@Override public void onTouchEvent(RecyclerView view, MotionEvent motionEvent) { }
}
Next, it is necessary to specify the RecyclerView.Adapter, which I will call DataManager. It is responsible for loading the data and inserting it into the views of the list. This data manager class will also contain the definition of the RecyclerView.ViewHolder.
public class DataManager extends RecyclerView.Adapter<DataManager.RecyclerViewHolder> {
public static class RecyclerViewHolder extends RecyclerView.ViewHolder {
TextView mName, mPhone;
View mCircle;
RecyclerViewHolder(View itemView) {
super(itemView);
mName = (TextView) itemView.findViewById(R.id.CONTACT_name);
mPhone = (TextView) itemView.findViewById(R.id.CONTACT_phone);
mCircle = itemView.findViewById(R.id.CONTACT_circle);
}
}
@Override
public RecyclerViewHolder onCreateViewHolder(ViewGroup viewGroup, int i) {
View v = LayoutInflater.from(viewGroup.getContext()).inflate(R.layout.contact_item, viewGroup, false);
return new RecyclerViewHolder(v);
}
@Override
public void onBindViewHolder(RecyclerViewHolder viewHolder, int i) {
// get the single element from the main array
final Contact contact = Contact.CONTACTS[i];
// Set the values
viewHolder.mName.setText(contact.get(Contact.Field.NAME));
viewHolder.mPhone.setText(contact.get(Contact.Field.PHONE));
// Set the color of the shape
GradientDrawable bgShape = (GradientDrawable) viewHolder.mCircle.getBackground();
bgShape.setColor(Color.parseColor(contact.get(Contact.Field.COLOR)));
}
@Override
public int getItemCount() {
return Contact.CONTACTS.length;
}
}
Finally, add the following code to the onCreate method, below setContentView. The main activity is ready.
RecyclerView rv = (RecyclerView) findViewById(R.id.rv); // layout reference
LinearLayoutManager llm = new LinearLayoutManager(this);
rv.setLayoutManager(llm);
rv.setHasFixedSize(true); // to improve performance
rv.setAdapter(new DataManager()); // the data manager is assigned to the RV
rv.addOnItemTouchListener( // and the click is handled
new RecyclerClickListener(this, new RecyclerClickListener.OnItemClickListener() {
@Override public void onItemClick(View view, int position) {
// STUB:
// The click on the item must be handled
}
}));
This is what the application looks like if you build and run it.
2. Create the Details Activity
Step 1: The Layout
The second activity is much simpler. It takes the ID of the selected contact and retrieves the additional information that the first activity doesn't show.
From a design point of view, the layout of this activity is critical, since it's the most important part of the application. As far as the XML is concerned, however, it's trivial: a series of TextView instances positioned in a pleasant way, using RelativeLayout and LinearLayout. This is what the layout looks like:
Since the two activities are linked by an intent, you need to send some piece of information that allows the second activity to understand which contact's details you requested.
One option would be to use the position variable as a reference. The position of the element in the list corresponds to the position of the element in the array, so there should be no harm in using this integer as a unique reference.
This would work, but if you take this approach and, for whatever reason, the data set is modified at runtime, the reference won't match the contact you're interested in. This is the reason why it is better to use an ad hoc ID, which is what the getId method defined in the Contact class provides.
Edit the onItemClick handler of the list of items as shown below.
@Override public void onItemClick(View view, int position) {
Intent intent = new Intent(MainActivity.this, DetailsActivity.class);
intent.putExtra(DetailsActivity.ID, Contact.CONTACTS[position].getId());
startActivity(intent);
}
The DetailsActivity will receive the information from the Intent extras and construct the correct object using the ID as a reference. This is shown in the following code block.
// Before the onCreate
public final static String ID = "ID";
public Contact mContact;
// In the onCreate, after the setContentView method
mContact = Contact.getItem(getIntent().getIntExtra(ID, 0));
Just as before, in the onCreateViewHolder method of the RecyclerView, the views are initialized using the findViewById method and populated using setText. For example, to configure the name field we do the following:
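The snippet isn't shown above. A sketch of what it might look like, assuming a DETAILS_name id in the details layout (only the DETAILS_circle id appears elsewhere in this tutorial, so the id name here is hypothetical):

```java
// Look up the name field in the details layout and fill it in
TextView name = (TextView) findViewById(R.id.DETAILS_name); // DETAILS_name is an assumed id
name.setText(mContact.get(Contact.Field.NAME));
```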
The process is the same for the other fields. The second activity is finally ready.
3. Meaningful Transitions
We have finally arrived at the core of the tutorial: animating the two activities using the new Lollipop mechanism for transitioning via a shared element.
Step 1: Configure Your Project
The first thing you will need to do is edit your theme in the styles.xml file in the values-v21 folder. In this way, you enable content transitions and set the entrance and the exit of the views that are not shared between the two activities.
Please note that your project must target (and thus be compiled with) at least Android API 21. The animations will be ignored on systems that don't have Lollipop installed. Unfortunately, for performance reasons, the AppCompat library does not provide complete backward compatibility for these animations.
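The theme edits aren't shown above. A sketch of what the values-v21 theme entries might look like; the attribute names are the standard Lollipop ones, while the explode transition is an assumed choice:

```xml
<style name="AppTheme" parent="android:Theme.Material.Light">
    <!-- enable window content transitions -->
    <item name="android:windowContentTransitions">true</item>
    <!-- entrance and exit of the views that are not shared -->
    <item name="android:windowEnterTransition">@android:transition/explode</item>
    <item name="android:windowExitTransition">@android:transition/explode</item>
</style>
```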
Step 2: Assign the Transition Name in the Layout Files
Once you've edited your styles.xml file, you have to point out the relationship between the common elements of the two views. In our example, the shared views are the field containing the name of the contact, the one containing the phone number, and the colored circle. For each of them, you have to specify a common transition name. Start by adding the following items to the strings.xml resource file:
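The string items aren't shown above. The resource names below are the ones referenced later in the tutorial; the values are arbitrary, as long as each shared pair uses the same one:

```xml
<string name="transition_name_circle">transition_circle</string>
<string name="transition_name_name">transition_name</string>
<string name="transition_name_phone">transition_phone</string>
```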
Then, for each of the three pairs, add the android:transitionName attribute with the corresponding value in the layout files. For the colored circle, the code looks like this:
<!-- In the single item layout: the item we are transitioning *from* -->
<View
    android:id="@+id/CONTACT_circle"
    android:transitionName="@string/transition_name_circle"
    android:layout_width="40dp"
    android:layout_height="40dp"
    android:background="@drawable/circle"
    android:layout_centerVertical="true"
    android:layout_alignParentLeft="true"/>
<!-- In the details activity: the item we are transitioning *to* -->
<View
    android:id="@+id/DETAILS_circle"
    android:transitionName="@string/transition_name_circle"
    android:layout_width="48dp"
    android:layout_height="48dp"
    android:background="@drawable/circle"
    android:layout_centerVertical="true"
    android:layout_alignParentLeft="true"/>
Thanks to this attribute, Android will know which views are shared between the two activities and will correctly animate the transition. Repeat the same process for the other two views.
Step 3: Configure the Intent
From a coding point of view, you will need to attach a specific ActivityOptions bundle to the intent. The method you need is makeSceneTransitionAnimation, which takes as parameters the context of the application and as many shared elements as we need. In the onItemClick method of the RecyclerView, edit the previously defined Intent like this:
@Override public void onItemClick(View view, int position) {
Intent intent = new Intent(MainActivity.this, DetailsActivity.class);
intent.putExtra(DetailsActivity.ID, Contact.CONTACTS[position].getId());
ActivityOptionsCompat options = ActivityOptionsCompat.makeSceneTransitionAnimation(
// the context of the activity
MainActivity.this,
// For each shared element, add to this method a new Pair item,
// which contains the reference of the view we are transitioning *from*,
// and the value of the transitionName attribute
new Pair<View, String>(view.findViewById(R.id.CONTACT_circle),
getString(R.string.transition_name_circle)),
new Pair<View, String>(view.findViewById(R.id.CONTACT_name),
getString(R.string.transition_name_name)),
new Pair<View, String>(view.findViewById(R.id.CONTACT_phone),
getString(R.string.transition_name_phone))
);
ActivityCompat.startActivity(MainActivity.this, intent, options.toBundle());
}
For each shared element to be animated, you will have to add a new Pair item to the makeSceneTransitionAnimation method. Each Pair holds two values: the first is a reference to the view you are transitioning from, and the second is the value of the transitionName attribute.
Be careful when importing the Pair class. You will need to include the android.support.v4.util package, not the android.util package. Also, remember to use the ActivityCompat.startActivity method instead of the startActivity method, because otherwise you will not be able to run your application on devices with an API level below 16.
That's it. You're done. It's as simple as that.
Conclusion
In this tutorial you learned how to beautifully and seamlessly transition between two activities that share one or more common elements, allowing for a visually pleasant and meaningful continuity.
You started by creating the first of the two activities, whose role is to display the list of contacts. You then completed the second activity, designing its layout and implementing a way to pass a unique reference between the two activities. Finally, you looked at the way makeSceneTransitionAnimation works, thanks to the XML transitionName attribute.
Bonus Tip: Stylistic Details
To create a true Material Design application, as shown in the previous screenshots, you will also need to change the colors of your theme. Edit your base theme in the values-v21 folder to achieve a nice result.
In the previous part of this series, we made the invaders move, the player and invaders fire bullets, and implemented collision detection. In the fourth and final part of this series, we will add the ability to move the player using the accelerometer, manage the levels, and ensure the player dies when hit by a bullet. Let's get started.
1. Finishing the Player Class
Step 1: Adding Properties
Add the following properties to the Player class below the canFire property.
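The property declarations themselves aren't included here. A sketch consistent with the description that follows, assuming three starting lives (matching the three respawns mentioned later in the testing step):

```swift
var invincible = false
var lives: Int = 3 {
    didSet {
        // the observer runs after every assignment to lives
        if lives < 0 {
            kill()
        } else {
            respawn()
        }
    }
}
```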
The invincible property will be used to make the player temporarily invincible when it loses a life. The lives property is the number of lives the player has before being killed.
We are using a property observer on the lives property, which will be called each time its value is set. The didSet observer is called immediately after the new value of the property is set. By doing this, each time we decrement the lives property it automatically checks if lives is less than zero, calling the kill method if it is. If the player has lives left, the respawn method is invoked. Property observers are very handy and can save a lot of extra code.
Step 2: respawn
The respawn method makes the player invincible for a short amount of time and fades the player in and out to indicate that it is temporarily invincible. The implementation of the respawn method looks like this:
func respawn(){
invincible = true
let fadeOutAction = SKAction.fadeOutWithDuration(0.4)
let fadeInAction = SKAction.fadeInWithDuration(0.4)
let fadeOutIn = SKAction.sequence([fadeOutAction,fadeInAction])
let fadeOutInAction = SKAction.repeatAction(fadeOutIn, count: 5)
let setInvincibleFalse = SKAction.runBlock(){
self.invincible = false
}
runAction(SKAction.sequence([fadeOutInAction,setInvincibleFalse]))
}
We set invincible to true and create a number of SKAction objects. By now, you should be familiar with how the SKAction class works.
Step 3: die
The die method is fairly simple. It checks whether invincible is false and, if it is, decrements the lives variable.
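A minimal sketch of die matching that description:

```swift
func die() {
    // only lose a life while not invincible
    if !invincible {
        lives -= 1   // triggers the didSet observer on lives
    }
}
```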
The kill method resets invaderNum to 1 and takes the user back to the StartGameScene so they can begin a new game.
func kill(){
invaderNum = 1
let gameOverScene = StartGameScene(size: self.scene!.size)
gameOverScene.scaleMode = self.scene!.scaleMode
let transitionType = SKTransition.flipHorizontalWithDuration(0.5)
self.scene!.view!.presentScene(gameOverScene,transition: transitionType)
}
This code should be familiar to you as it is nearly identical to the code we used to move to the GameScene from the StartGameScene. Note that we force unwrap the scene to access the scene's size and scaleMode properties.
This completes the Player class. We now need to call the die and kill methods in the didBeginContact(_:) method.
We can now test everything. A quick way to test the die method is by commenting out the moveInvaders call in the update(_:) method. After the player dies and respawns three times, you should be taken back to the StartGameScene.
To test the kill method, make sure the moveInvaders call is not commented out. Set the invaderSpeed property to a high value, for example, 200. The invaders should reach the player very quickly, which results in an instant kill. Change invaderSpeed back to 2 once you're finished testing.
2. Finishing Firing Invaders
As the game stands right now, only the bottom row of invaders can fire bullets. We already have the collision detection for when a player bullet hits an invader. In this step, we will remove an invader that is hit by a bullet and add the invader one row up to the array of invaders that can fire. Add the following to the didBeginContact(_:) method.
func didBeginContact(contact: SKPhysicsContact) {
...
if ((firstBody.categoryBitMask & CollisionCategories.Invader != 0) &&
(secondBody.categoryBitMask & CollisionCategories.PlayerBullet != 0)){
if (contact.bodyA.node?.parent == nil || contact.bodyB.node?.parent == nil) {
return
}
let invadersPerRow = invaderNum * 2 + 1
let theInvader = firstBody.node? as Invader
let newInvaderRow = theInvader.invaderRow - 1
let newInvaderColumn = theInvader.invaderColumn
if(newInvaderRow >= 1){
self.enumerateChildNodesWithName("invader") { node, stop in
let invader = node as Invader
if invader.invaderRow == newInvaderRow && invader.invaderColumn == newInvaderColumn{
self.invadersWhoCanFire.append(invader)
stop.memory = true
}
}
}
let invaderIndex = findIndex(invadersWhoCanFire,valueToFind: firstBody.node? as Invader)
if(invaderIndex != nil){
invadersWhoCanFire.removeAtIndex(invaderIndex!)
}
theInvader.removeFromParent()
secondBody.node?.removeFromParent()
}
}
We've removed the NSLog statement and first check if contact.bodyA.node?.parent and contact.bodyB.node?.parent are not nil. They will be nil if we have already processed this contact. In that case, we return from the function.
We calculate the invadersPerRow as we have done before and set theInvader to firstBody.node?, casting it to an Invader. Next, we get the newInvaderRow by subtracting 1 and the newInvaderColumn, which stays the same.
We only want to enable invaders to fire if the newInvaderRow is greater than or equal to 1, otherwise we would be trying to set an invader in row 0 to be able to fire. There is no row 0 so this would cause an error.
Next, we enumerate through the invaders, looking for the invader that has the correct row and column. Once it is found, we append it to the invadersWhoCanFire array and set stop.memory to true so the enumeration stops early.
We need to find the invader that was hit with a bullet in the invadersWhoCanFire array so we can remove it. Normally, arrays have some kind of functionality like an indexOf method or something similar to accomplish this. At the time of writing, there is no such method for arrays in the Swift language. The Swift Standard Library defines a find function that we could use, but I found a method in the sections on generics in the Swift Programming Language Guide that will accomplish what we need. The function is aptly named findIndex. Add the following to the bottom of GameScene.swift.
func findIndex<T: Equatable>(array: [T], valueToFind: T) -> Int? {
for (index, value) in enumerate(array) {
if value == valueToFind {
return index
}
}
return nil
}
If you are curious about how this function works, I recommend you read more about generics in the Swift Programming Language Guide.
Now that we have a method we can use to find the invader, we invoke it, passing in the invadersWhoCanFire array and theInvader. We check if invaderIndex isn't equal to nil and remove the invader from the invadersWhoCanFire array using the removeAtIndex(index: Int) method.
You can now test whether it works as it should. An easy way is to comment out the call to player.die in the didBeginContact(_:) method. Make sure you remove the comment when you are done testing. Notice that the program crashes if you kill all the invaders. We will fix this in the next step.
The application crashes because we have an SKAction created with repeatActionForever(_:) that keeps telling invaders to fire bullets. At this point, there are no invaders left to fire bullets, so the game crashes. We can fix this by checking the isEmpty property of the invadersWhoCanFire array. If the array is empty, the level is over. Enter the following in the fireInvaderBullet method.
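The snippet isn't included here. A sketch of the check, assuming the existing firing logic moves into the else branch:

```swift
func fireInvaderBullet() {
    if invadersWhoCanFire.isEmpty {
        // no invaders left to fire: the level is over
        invaderNum += 1
        levelComplete()
    } else {
        // ... existing code that picks a random invader and fires a bullet ...
    }
}
```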
The level is complete, which means we increment invaderNum, which is used for the levels. We also invoke levelComplete, which we still need to create in the steps coming up.
3. Completing a Level
We need to have a set number of levels. If we don't, after several rounds we will have so many invaders they won't fit on the screen. Add a property maxLevels to the GameScene class.
class GameScene: SKScene, SKPhysicsContactDelegate{
...
let player:Player = Player()
let maxLevels = 3
Now add the levelComplete method at the bottom of GameScene.swift.
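The method body isn't shown here. A sketch based on the description that follows, reusing the flip transition used elsewhere in the game:

```swift
func levelComplete() {
    if invaderNum <= maxLevels {
        // more levels to play: show the level-complete screen
        let levelCompleteScene = LevelCompleteScene(size: size)
        levelCompleteScene.scaleMode = scaleMode
        let transitionType = SKTransition.flipHorizontalWithDuration(0.5)
        view?.presentScene(levelCompleteScene, transition: transitionType)
    } else {
        // all levels beaten: reset and start over
        invaderNum = 1
        newGame()
    }
}
```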
We first check whether invaderNum is less than or equal to the maxLevels we have set. If so, we transition to the LevelCompleteScene, otherwise we reset invaderNum to 1 and call newGame. The LevelCompleteScene class does not exist yet, nor does the newGame method, so let's tackle these one at a time over the next two steps.
4. Implementing the LevelCompleteScene Class
Create a new Cocoa Touch Class named LevelCompleteScene that is a subclass of SKScene. The implementation of the class looks like this:
import Foundation
import SpriteKit
class LevelCompleteScene:SKScene{
override func didMoveToView(view: SKView) {
self.backgroundColor = SKColor.blackColor()
let startGameButton = SKSpriteNode(imageNamed: "nextlevelbtn")
startGameButton.position = CGPointMake(size.width/2,size.height/2 - 100)
startGameButton.name = "nextlevel"
addChild(startGameButton)
}
override func touchesBegan(touches: NSSet, withEvent event: UIEvent) {
/* Called when a touch begins */
for touch: AnyObject in touches {
let touchLocation = touch.locationInNode(self)
let touchedNode = self.nodeAtPoint(touchLocation)
if(touchedNode.name == "nextlevel"){
let gameOverScene = GameScene(size: size)
gameOverScene.scaleMode = scaleMode
let transitionType = SKTransition.flipHorizontalWithDuration(0.5)
view?.presentScene(gameOverScene,transition: transitionType) }
}
}
}
The implementation is identical to the StartGameScene class, except that we set the name property of startGameButton to "nextlevel". This code should be familiar. If not, head back to the first part of this tutorial for a refresher.
5. The newGame Method
The newGame method simply transitions back to the StartGameScene. Add the following to the bottom of GameScene.swift.
func newGame(){
let gameOverScene = StartGameScene(size: size)
gameOverScene.scaleMode = scaleMode
let transitionType = SKTransition.flipHorizontalWithDuration(0.5)
view?.presentScene(gameOverScene,transition: transitionType)
}
If you test the application, you can play a few levels or lose a few games, but the player has no way to move and this makes for a boring game. Let's fix that in the next step.
6. Moving the Player Using the Accelerometer
We will use the accelerometer to move the player. We first need to import the CoreMotion framework. Add an import statement for the framework at the top of GameScene.swift.
import SpriteKit
import CoreMotion
We also need a couple of new properties.
let maxLevels = 3
let motionManager: CMMotionManager = CMMotionManager()
var accelerationX: CGFloat = 0.0
Next, add a method setupAccelerometer at the bottom of GameScene.swift.
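The method body isn't included here. A sketch consistent with the explanation that follows:

```swift
func setupAccelerometer() {
    // interval, in seconds, between updates delivered to the handler
    motionManager.accelerometerUpdateInterval = 0.2
    motionManager.startAccelerometerUpdatesToQueue(NSOperationQueue.currentQueue(),
        withHandler: { (accelerometerData: CMAccelerometerData!, error: NSError!) in
            let acceleration = accelerometerData.acceleration
            // keep only the x axis, converted to CGFloat
            self.accelerationX = CGFloat(acceleration.x)
    })
}
```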
Here we set the accelerometerUpdateInterval, which is the interval in seconds at which updates are provided to the handler. I found that 0.2 works well, but you can try different values if you wish. Inside the handler, a closure, we get accelerometerData.acceleration, which is a structure of type CMAcceleration.
struct CMAcceleration {
var x: Double
var y: Double
var z: Double
init()
init(x x: Double, y y: Double, z z: Double)
}
We are only interested in the x property and we use numeric type conversion to cast it to a CGFloat for our accelerationX property.
Now that we have the accelerationX property set, we can move the player. We do this in the didSimulatePhysics method. Add the following to the bottom of GameScene.swift.
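The snippet isn't shown here. A sketch, where the multiplier 600 is an assumed tuning value:

```swift
override func didSimulatePhysics() {
    // drive the player horizontally from the latest accelerometer reading
    player.physicsBody?.velocity = CGVectorMake(accelerationX * 600, 0)
}
```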
Invoke setupAccelerometer in didMoveToView(_:) and you should be able to move the player with the accelerometer. There's only one problem. The player can move off-screen to either side and it takes a few seconds to get him back. We can fix this by using the physics engine and collisions. We do this in the next step.
7. Keeping the Player on the Screen
As mentioned in the previous step, the player can move off-screen. This is a simple fix using Sprite Kit's physics engine. First, add a new CollisionCategory named EdgeBody.
struct CollisionCategories{
static let Invader : UInt32 = 0x1 << 0
static let Player: UInt32 = 0x1 << 1
static let InvaderBullet: UInt32 = 0x1 << 2
static let PlayerBullet: UInt32 = 0x1 << 3
static let EdgeBody: UInt32 = 0x1 << 4
}
Set this as the player's collisionBitMask in its init method.
We initialize a physics body by invoking init(edgeLoopFromRect:), passing in the scene's frame. The initializer creates an edge loop from the scene's frame. It is important to note that an edge has no volume or mass and is always treated as if the dynamic property is equal to false. Edges may also only collide with volume-based physics bodies, which our player is.
We also set the categoryBitMask to CollisionCategories.EdgeBody. If you test the application, you might notice that your ship can no longer move off-screen, but sometimes it rotates. When a physics body collides with another physics body, it is possible that this results in a rotation. This is the default behavior. To remedy this, we set allowsRotation to false in Player.swift.
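The code for this step isn't included here. A sketch of the setup just described, with the first two lines assumed to live in the scene's didMoveToView(_:) and the last two in the Player initializer:

```swift
// In GameScene's didMoveToView(_:): an edge loop around the scene's frame
self.physicsBody = SKPhysicsBody(edgeLoopFromRect: frame)
self.physicsBody?.categoryBitMask = CollisionCategories.EdgeBody

// In Player's init: collide with the edge, and never rotate on contact
self.physicsBody?.collisionBitMask = CollisionCategories.EdgeBody
self.physicsBody?.allowsRotation = false
```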
8. Creating the Star Field
Step 1: Creating the Particle File
The game has a moving star field in the background. We can create the star field using Sprite Kit's particle engine.
Create a new file and select Resource from the iOS section. Choose SpriteKit Particle File as the template and click Next. For the Particle template, choose Rain and save it as StarField. Click Create to open the file in the editor. To see the options, open the SKNode Inspector on the right.
Instead of going through every setting here, which would take a long time, it would be better to read the documentation to learn about each individual setting. I won't go into detail about the settings of the star field either. If you are interested, open the file in Xcode and have a look at the settings I used.
Step 2: Adding the Star Field to the Scenes
Add the following to didMoveToView(_:) in StartGameScene.swift.
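The snippet isn't shown here. A sketch of loading the particle file, with the exact position and zPosition values being assumptions:

```swift
// load the particle system created in the editor
let starField = SKEmitterNode(fileNamed: "StarField")
starField.position = CGPointMake(size.width / 2, size.height / 2)
// very low zPosition so the particles never block taps on the buttons
starField.zPosition = -1000
addChild(starField)
```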
We use an SKEmitterNode to load the StarField.sks file, set its position and give it a low zPosition. The reason for the low zPosition is to make sure it doesn't prevent the user from tapping the start button. The particle system generates hundreds of particles so by setting it really low we overcome that problem. You should also know that you can manually configure all the particle properties on an SKEmitterNode, although it is much easier to use the editor to create an .sks file and load it at runtime.
Now add the star field to GameScene.swift and LevelCompleteScene.swift. The code is exactly the same as above.
9. Implementing the PulsatingText Class
Step 1: Create the PulsatingText Class
The StartGameScene and LevelCompleteScene have text that grows and shrinks repeatedly. We will subclass SKLabelNode and use a couple of SKAction instances to achieve this effect.
Create a new Cocoa Touch Class that is a subclass of SKLabelNode, name it PulsatingText, and add the following code to it.
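The class body isn't included here. A sketch matching the description, with the scale factors and durations being assumed values:

```swift
import SpriteKit

class PulsatingText: SKLabelNode {
    // no initializer defined, so the superclass initializers are inherited
    func setTextFontSizeAndPulsate(theText: String, theFontSize: CGFloat) {
        text = theText
        fontSize = theFontSize
        // scale up, then back down, repeated forever for a pulsating effect
        let scaleUp = SKAction.scaleTo(1.2, duration: 0.6)
        let scaleDown = SKAction.scaleTo(0.8, duration: 0.6)
        let pulse = SKAction.sequence([scaleUp, scaleDown])
        runAction(SKAction.repeatActionForever(pulse))
    }
}
```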
One of the first things you may have noticed is that there is no initializer. If a subclass doesn't define a designated initializer, it automatically inherits all of its superclass's designated initializers.
We have one method setTextFontSizeAndPulsate(theText:theFontSize:), which does exactly what it says. It sets the SKLabelNode's text and fontSize properties, and creates a number of SKAction instances to make the text scale up and then back down, creating a pulsating effect.
Step 2: Add PulsatingText to StartGameScene
Add the following code to StartGameScene.swift in didMoveToView(_:).
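The snippet isn't included here. A sketch, where the font name, text, font size, and position are all assumptions:

```swift
// create the pulsating title and place it above the start button
let invaderText = PulsatingText(fontNamed: "Chalkduster")   // assumed font
invaderText.setTextFontSizeAndPulsate("INVADERZ", theFontSize: 50) // assumed text/size
invaderText.position = CGPointMake(size.width / 2, size.height / 2 + 200)
addChild(invaderText)
```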
We initialize a PulsatingText instance, invaderText, and invoke setTextFontSizeAndPulsate(theText:theFontSize:) on it. We then set its position and add it to the scene.
Step 3: Add PulsatingText to LevelCompleteScene
Add the following code to LevelCompleteScene.swift in didMoveToView(_:).
This is exactly the same as the previous step. Only the text we are passing in is different.
10. Taking the Game Further
This completes the game. I do have some suggestions for how you could further expand upon it. Inside the images folder, there are three different invader images. When you are adding invaders to the scene, randomly choose one of these three images. You will need to update the invader's initializer to accept an image as a parameter. Refer to the Bullet class for a hint.
There is also a UFO image. Try to make it appear and move across the screen every fifteen seconds or so. If the player hits it, give them an extra life. You may want to limit the number of lives they can have if you do this. Lastly, try to make a HUD for the player's lives.
These are just some suggestions. Try and make the game your own.
Conclusion
This brings this series to a close. You should have a game that closely resembles the original Space Invaders game. I hope you found this tutorial helpful and have learned something new. Thanks for reading.
In this tutorial, we're going to focus on live apps. Live apps are one of the core concepts in Windows Phone development, and to properly create a quality experience, many factors are involved, like notifications, agents, and Tiles.
The Multitasking Approach
As we’ve seen in the application life cycle discussed earlier in this series, applications are suspended when they are not in the foreground. Every running process is terminated, so the application can’t execute operations while in the background.
There are three ways to overcome this limitation:
Push notifications, which are sent by a remote service using an HTTP channel. This approach is used to send notifications to users, update a Tile, or warn users that something has happened.
Background agents, which are services connected to our application that can run from time to time under specific conditions. These services can also be used for push notification scenarios—in this case, remote services are not involved—but they can also perform other tasks as long as they use supported APIs.
Alarms and reminders, which display reminders to the user at specific dates and times.
Let’s see in detail how they work.
Push Notifications
Push notifications are messages sent to the phone that can react in many ways based on the notification type. There are three types of push notifications:
Raw notifications can store any type of information, but they can be received only if the associated application is in the foreground.
Toast notifications are the most intrusive ones, since they display a message at the top of the screen, along with a sound and a vibration. Text messages are a good example of toast notifications.
Tile notifications can be used to update the application’s Tile.
There are three factors involved in the push notification architecture:
The Windows Phone application, which acts as a client to receive notifications.
The server application, which can be a web application or a service, takes care of sending the notifications. Usually, the server stores a list of all the devices that are registered to receive notifications.
The Microsoft Push Notification Service (MPNS), which is a cloud service offered by Microsoft that is able to receive notifications from the server application and route them to the Windows Phone clients.
Every Windows Phone application receives push notifications using a channel, which is identified by a unique URI. The server application will send notifications to the registered clients by sending an XML string to this URI using a POST command. The MPNS will take care of routing the requests to the proper devices.
Here is a sample of a URI that represents a channel:
Note: MPNS usage is free, but limited to 500 notifications per day per device. If you need to exceed this limit, you have to buy a TLS digital certificate, which you'll need to submit during the certification process and use to digitally sign your server application. This way, you'll also be able to support SSL to encrypt the notification channel.
Sending a Notification: The Server
As already mentioned, notifications are sent using an HTTP channel with a POST command. The benefit is that it relies on standard technology, so you’ll be able to create a server application with any development platform.
The HTTP request that represents a notification has the following features:
It's defined using XML, so the content type of the request should be text/xml.
It includes a custom header called X-WindowsPhone-Target, which contains the notification's type (toast, Tile, or raw).
It includes a custom header called X-NotificationClass, which specifies the notification's priority (we'll discuss this in more depth later).
Let’s see how the different push notifications are structured in detail.
Toast Notifications
The following sample shows the XML needed to send a toast notification:
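A minimal toast payload looks like this; wp:Text1 is the notification's title and wp:Text2 is its text, while the page path and query string in wp:Param are placeholder values:

```xml
<?xml version="1.0" encoding="utf-8"?>
<wp:Notification xmlns:wp="WPNotification">
  <wp:Toast>
    <wp:Text1>title</wp:Text1>
    <wp:Text2>text</wp:Text2>
    <wp:Param>/Page.xaml?id=1</wp:Param>
  </wp:Toast>
</wp:Notification>
```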
wp:Param is the optional notification deep link; when this is set, the application is opened automatically on the specified page with one or more query string parameters that can be used to identify the notification’s context.
When you prepare the request to send over HTTP, the X-WindowsPhone-Target header should be set to toast, while the X-NotificationClass header supports the following values:
2 to send the notification immediately.
12 to send the notification after 450 seconds.
22 to send the notification after 900 seconds.
Tile Notifications
Tile notifications are used to update either the main Tile or one of the secondary Tiles of the application. We won’t describe the XML needed to send the notification here: Tiles are more complex than the other notification types since Windows Phone 8 supports many templates and sizes. We’ll look at the XML that describes Tile notifications later in the Tiles section of the article.
To send a Tile notification, the X-WindowsPhone-Target header of the HTTP request should be set to tile, while the X-NotificationClass header supports the following values:
1 to send the notification immediately.
11 to send the notification after 450 seconds.
21 to send the notification after 900 seconds.
Raw Notifications
Raw notifications don’t have a specific XML definition since they can deliver any kind of data, so you are free to define your own payload format.
To send a raw notification, the X-WindowsPhone-Target header of the HTTP request should be set to raw, while the X-NotificationClass header supports the following values:
3 to send the notification immediately.
13 to send the notification after 450 seconds.
23 to send the notification after 900 seconds.
Sending the Request and Managing the Response
The following sample code shows an example of how to send a toast notification using the HttpWebRequest class, one of the basic .NET Framework classes for performing network operations:
string toastNotificationPayloadXml = "<?xml version=\"1.0\" encoding=\"utf-8\"?>" +
"<wp:Notification xmlns:wp=\"WPNotification\">" +
"<wp:Toast>" +
"<wp:Text1> title </wp:Text1>" +
"<wp:Text2> text </wp:Text2>" +
"</wp:Toast> " +
"</wp:Notification>";
byte[] payload = Encoding.UTF8.GetBytes(toastNotificationPayloadXml);
var pushNotificationWebRequest = (HttpWebRequest)WebRequest.Create("http://sn1.notify.live.net/throttledthirdparty/01.00/AAEqbi-clyknR6iysF1QNBFpAgAAAAADAQAAAAQUZm52OkJCMjg1QTg1QkZDMkUxREQ");
pushNotificationWebRequest.Method = "POST";
pushNotificationWebRequest.ContentType = "text/xml";
var messageId = Guid.NewGuid();
pushNotificationWebRequest.Headers.Add("X-MessageID", messageId.ToString());
pushNotificationWebRequest.Headers.Add("X-WindowsPhone-Target", "toast");
pushNotificationWebRequest.Headers.Add("X-NotificationClass", "2");
pushNotificationWebRequest.ContentLength = payload.Length;
using (var notificationRequestStream = pushNotificationWebRequest.GetRequestStream())
{
notificationRequestStream.Write(payload, 0, payload.Length);
}
using (var pushNotificationWebResponse = (HttpWebResponse)pushNotificationWebRequest.GetResponse())
{
//Check the status of the response.
}
The XML definition is simply stored in a string. We’re going to change just the node values that store the notification’s title and text. Then, we start to prepare the HTTP request by using the HttpWebRequest class. We add the custom headers, define the content’s length and type (text/xml), and specify the method to use (POST).
In the end, by using the GetRequestStream() method, we get the stream location to write the request’s content, which is the notification’s XML. Then we send it by calling the GetResponse() method, which returns the status of the request. By analyzing the response we are able to tell whether or not the operation was successful.
The response’s analysis involves the status code and three custom headers:
The response’s status code returns generic information that tells you whether the request has been received. It’s based on the standard HTTP status codes. For example, 200 OK means that the request has been successfully received, while 404 Not Found means that the URI was invalid.
The X-NotificationStatus header tells you if the MPNS has received the request using the values Received, Dropped, QueueFull, and Suppressed.
The X-DeviceConnectionStatus header returns the device status when the request is sent: Connected, Inactive, Disconnected, or TempDisconnected.
The X-SubscriptionStatus header returns if the channel is still valid (Active) or not (Expired). In the second case, we shouldn’t try to send it again, since it doesn’t exist anymore.
The combination of these parameters will help you understand the real status of the operation. The MSDN documentation features descriptions of all the possible combinations.
It’s important to correctly manage the notifications because MPNS doesn’t offer any automatic retry mechanism. If a notification is not delivered, MPNS won’t try to send it again, even if the operation failed for a temporary reason (for example, the device wasn’t connected to the Internet). It’s up to you to implement a retry mechanism based on the response.
PushSharp: A Push Notification Helper Library
As you can see, sending push notifications is a little bit tricky since it requires you to manually set headers, XML strings, etc. Some developers have worked on wrappers that hide the complexity of manually defining the notification by exposing high-level APIs so that you can work with classes and objects.
One of the most interesting wrappers is called PushSharp, which can be simply installed on your server project using NuGet. The biggest benefits of this library are:
It’s a generic .NET library that supports not only Windows Phone, but the most common platforms that use push notifications, like Windows Store apps, iOS, Android, and Blackberry. If you have a cross-platform application, it will make your life easier in managing a single-server application that is able to send notifications to different kinds of devices.
It’s totally compatible with Windows Phone 8, so it supports not only toast and raw notifications, but also all the new Tile templates and sizes.
The following sample shows how simple it is to send a toast notification using this library:
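A minimal sketch, assuming the PushSharp object model described below; the exact property names can differ slightly between PushSharp versions, and channelUri stands in for the channel URI previously received from the client:

```csharp
// Hypothetical sketch of sending a toast with PushSharp.
var notification = new WindowsPhoneToastNotification();
notification.Text1 = "title";                  // notification title
notification.Text2 = "text";                   // notification text
notification.NavigatePath = "/MainPage.xaml";  // optional deep link
notification.EndPointUrl = channelUri;         // channel URI sent by the client

var broker = new PushBroker();
broker.RegisterWindowsPhoneService();          // enable the Windows Phone dispatcher
broker.QueueNotification(notification);        // queued and sent asynchronously
```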
Every notification type is represented by a specific class, which exposes a property for every notification feature. In the previous sample, the WindowsPhoneToastNotification class offers properties to set the notification’s title, text, and deep link.
The channel URI location to send the notification is set in the EndPointUrl property. Once everything is set, you can send it by creating a PushBroker object, which represents the dispatcher that takes care of sending notifications. First, you have to register for the kind of notifications you want to send. Since we’re working with Windows Phone, we use the RegisterWindowsPhoneService() method. Then, we can queue the notification by simply passing it to the QueueNotification() method. It will be automatically sent with the priority you’ve set.
The approach is the same if you want to send a Tile. You have three different classes based on the Tile’s template, WindowsPhoneCycleTileNotification, WindowsPhoneFlipTileNotification, and WindowsPhoneIconicTileNotification; or WindowsPhoneRawNotification for a raw notification.
In the end, the PushBroker class exposes many events to control the notification life cycle, like OnNotificationSent which is triggered when a notification is successfully sent, or OnNotificationFailed which is triggered when the sending operation has failed.
Receiving Push Notifications: The Client
The base class that identifies a push notification channel is called HttpNotificationChannel and exposes many methods and events that are triggered when something connected to the channel happens.
Note: To receive push notifications you’ll need to enable the ID_CAP_PUSH_NOTIFICATION capability in the manifest file.
Every application has a single unique channel, identified by a keyword. For this reason, it should be created only the first time the application subscribes to receive notifications; if you try to create a channel that already exists, you’ll get an exception. To avoid this scenario, the HttpNotificationChannel class offers the Find() method, which returns a reference to the channel.
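A sketch of that pattern; the channel name "MyChannel" is an arbitrary example:

```csharp
HttpNotificationChannel channel = HttpNotificationChannel.Find("MyChannel");
if (channel == null)
{
    // No channel exists yet: create and open a new one.
    channel = new HttpNotificationChannel("MyChannel");
    channel.Open();
}
```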
In the previous sample, the channel is created only if the Find() method fails and returns a null object. The HttpNotificationChannel class exposes many methods to start interacting with push notifications; they should be called only if the channel doesn’t already exist. In the sample we see the Open() method which should be called to effectively create the channel, and which automatically subscribes to raw notifications.
If we want to be able to receive toast and Tile notifications, we need to use two other methods offered by the class: BindToShellToast() and BindToShellTile(). The following sample shows a complete initialization:
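Again using an arbitrary channel name:

```csharp
HttpNotificationChannel channel = HttpNotificationChannel.Find("MyChannel");
if (channel == null)
{
    channel = new HttpNotificationChannel("MyChannel");
    channel.Open();             // creates the channel; subscribes to raw notifications
    channel.BindToShellToast(); // subscribe to toast notifications
    channel.BindToShellTile();  // subscribe to Tile notifications
}
```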
Beyond offering methods, the HttpNotificationChannel class also offers some events to manage different conditions that can be triggered during the channel life cycle.
The most important one is called ChannelUriUpdated, which is triggered when the channel creation operation is completed and the MPNS has returned the URI that identifies it. This is the event in which, in a regular application, we will send the URI to the server application so that it can store it for later use. It’s important to subscribe to this event whether the channel has just been created, or already exists and has been retrieved using the Find() method. From time to time, the URI that identifies the channel can expire. In this case, the ChannelUriUpdated event is triggered again to return the new URI.
The following sample shows a full client initialization:
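This sketch assumes it runs in a page's code-behind (for Dispatcher access) and uses an arbitrary channel name:

```csharp
private void InitializeChannel()
{
    HttpNotificationChannel channel = HttpNotificationChannel.Find("MyChannel");
    if (channel == null)
    {
        channel = new HttpNotificationChannel("MyChannel");
        // Subscribe before opening so the first URI update isn't missed.
        channel.ChannelUriUpdated += channel_ChannelUriUpdated;
        channel.Open();
        channel.BindToShellToast();
        channel.BindToShellTile();
    }
    else
    {
        // The channel already exists: subscribe anyway, since the URI
        // can expire and be reissued at any time.
        channel.ChannelUriUpdated += channel_ChannelUriUpdated;
    }
}

private void channel_ChannelUriUpdated(object sender, NotificationChannelUriEventArgs e)
{
    // In a real application you would send e.ChannelUri to your server;
    // here we just display it to the user.
    Dispatcher.BeginInvoke(() => MessageBox.Show(e.ChannelUri.ToString()));
}
```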
As you can see, the ChannelUriUpdated event returns a parameter with the ChannelUri property, which contains the information we need. In the previous sample, we just display the URI channel to the user.
There are two other events offered by the HttpNotificationChannel class that can be useful:
HttpNotificationReceived is triggered when the application has received a raw notification.
ShellToastNotificationReceived is triggered when the application receives a toast notification while it is open. By default, toast notifications are not displayed if the associated application is in the foreground.
The HttpNotificationReceived event receives, in the parameters, the object that identifies the notification. The content is stored in the Body property, which is a stream since raw notifications can store any type of data. In the following sample, we assume that the raw notification contains text and display it when it’s received:
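A sketch of such a handler (StreamReader comes from System.IO):

```csharp
private void channel_HttpNotificationReceived(object sender, HttpNotificationEventArgs e)
{
    // Body is a stream; here we assume the raw payload is plain text.
    using (StreamReader reader = new StreamReader(e.Notification.Body))
    {
        string message = reader.ReadToEnd();
        Dispatcher.BeginInvoke(() => MessageBox.Show(message));
    }
}
```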
The ShellNotificationReceived event, instead, returns in the parameters a Collection object, which contains all the XML nodes that are part of the notification. The following sample shows you how to extract the title and the description of the notification, and how to display them to the user:
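A sketch of the handler:

```csharp
private void channel_ShellToastNotificationReceived(object sender, NotificationEventArgs e)
{
    // The Collection dictionary holds the toast's XML nodes, keyed by node name.
    string title = e.Collection["wp:Text1"];
    string text = e.Collection["wp:Text2"];
    Dispatcher.BeginInvoke(() => MessageBox.Show(title + " " + text));
}
```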
If something goes wrong when you open a notification channel, you can subscribe to the ErrorOccurred event of the HttpNotificationChannel class to discover what’s happened.
The event returns a parameter that contains information about the error, like ErrorType, ErrorCode, ErrorAdditionalData, and Message.
The following list includes the most common conditions that can lead to a failure during the channel opening:
To preserve battery life and performance, Windows Phone limits the maximum number of channels that are kept alive at the same time. If the limit has been reached and you try to open a new channel, you’ll get the value ChannelOpenFailed as ErrorType.
The received notification can contain a message which is badly formatted; in this case the ErrorType will be MessageBadContent.
You can send too many notifications at the same time; in this case, they are rejected with the NotificationRateTooHigh error.
To preserve battery power, notifications can be received only if the battery isn’t critical; in this case, you’ll get a PowerLevelChanged error.
The ErrorAdditionalData property can contain additional information about the error. For example, if you get a PowerLevelChanged error, you’ll be informed of the current battery level (low, critical, or normal).
void channel_ErrorOccurred(object sender, NotificationChannelErrorEventArgs e)
{
if (e.ErrorType == ChannelErrorType.PowerLevelChanged)
{
ChannelPowerLevel level = (ChannelPowerLevel) e.ErrorAdditionalData;
switch (level)
{
case ChannelPowerLevel.LowPowerLevel:
MessageBox.Show("Battery is low");
break;
case ChannelPowerLevel.CriticalLowPowerLevel:
MessageBox.Show("Battery is critical");
break;
}
}
}
Background Agents
Push notifications are the best way to interact with the user when the application is not running since they offer the best experience and, at the same time, preserve battery life. However, the experience is limited to notifications: you can’t execute any other operation, like fetching data from a web service or reading a file from the local storage. Moreover, for certain scenarios in which you don’t require instant notifications, creating the required server infrastructure can be too expensive. Think, for example, of a weather application: it’s not critical that the app is updated the moment the forecast changes.
For all these scenarios, Windows Phone 7.5 has introduced background agents, which are special services periodically executed by Windows Phone, even when the application is not running. There are two types of scheduled background agents: periodic and resource intensive. In the New Project section of Visual Studio, you’ll find templates for all the supported agent types. In this section we’ll see how periodic agents work in detail.
Tip: Even if a background agent is a separate Visual Studio project, it shares the same resources with the foreground application. For example, they share the same local storage, so you’re able to read data created by the application in the agent, and vice versa.
Agent Limits
There are some limitations that background agents have to satisfy. The most important one is connected to timing, since agents can run only in a specific time frame for a limited amount of time. We’ll discuss this limitation later since there are some differences according to the background agent type you’re going to use.
The first limitation concerns supported APIs: only a limited number of APIs can be used in a background agent. Basically, all the APIs that are related to the user interface are prohibited since agents can’t interact with the application interface. You can find the complete list of unsupported APIs in the MSDN documentation.
The second limitation is about memory: a background agent can’t use more than 11 MB of memory, otherwise it will be terminated. It’s important to highlight that during the testing process (when the Visual Studio debugger is attached), the memory limit will be disabled, and the background agent won’t be terminated if it has used more than 11 MB. You’ll have to test it in a real environment if you want to make sure the limit isn’t reached.
The third and final limitation is about timing: a background agent is automatically disabled 14 days after it has been initialized by the connected application. There are two ways to overcome this limitation:
The user keeps using the application; the agent can be renewed for another 14 days every time the application is opened.
The agent is used to send notifications to update the main application’s Tile or the lock screen; every time the agent sends a notification it will be automatically renewed for another 14 days.
It’s important to keep in mind that if the background agent execution consecutively fails twice (because it exceeded the memory limit or raised an unmanaged exception), it’s automatically disabled; the application will have to reenable it when it’s launched.
Periodic Agents
Periodic agents are used when you need to execute small operations frequently. They are typically executed every 30 minutes (to save battery, execution may be aligned with other background processes, which can shift the start time by up to 10 minutes), and they can run for up to 25 seconds. Users are able to manage periodic agents from the Settings panel and disable the ones they don’t need. Periodic agents are automatically disabled if the phone is running in Battery Saver mode; they’ll be automatically restored when sufficient battery power is available.
Periodic agents are identified by the PeriodicTask class, which belongs to the Microsoft.Phone.Scheduler namespace.
Resource Intensive Agents
Resource intensive agents have been created for the opposite scenario: long-running tasks that are executed occasionally. They can run for up to 10 minutes, but only if the phone is connected to a Wi-Fi network and an external power source.
These agents are perfect for tasks like data synchronization. In fact, they are typically executed during the night, when the phone is charging. In addition to the previous conditions, the phone shouldn’t be in use: the lock screen should be active, and no other operations (like phone calls) should be in progress.
Resource intensive agents are identified by the ResourceIntensiveTask, which is also part of the Microsoft.Phone.Scheduler namespace.
Creating a Background Agent
As already mentioned, background agents are defined in a project separate from the front-end application. Periodic and resource intensive agents share the same template and architecture; the Windows Phone application decides whether to register them as PeriodicTask or ResourceIntensiveTask objects.
To create a background agent, you’ll have to add a new project to the solution that contains your Windows Phone application. In the Add New Project window you’ll find a template called Windows Phone Scheduled Task Agent in the Windows Phone section.
The project already contains the class that will manage the agent; it’s called ScheduledAgent and it inherits from the ScheduledTaskAgent class. The class already implements a method and an event handler.
The method, called OnInvoke(), is the most important one. It’s the method that is triggered when the background agent is executed, so it contains the logic that performs the operations we need. The following sample shows how to send a toast notification from a background agent:
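A sketch of an OnInvoke() implementation; the title, text, and deep link are placeholder values:

```csharp
protected override void OnInvoke(ScheduledTask task)
{
    ShellToast toast = new ShellToast
    {
        Title = "Agent",
        Content = "This toast was sent from a background agent",
        // Optional deep link opened when the user taps the toast.
        NavigationUri = new Uri("/MainPage.xaml", UriKind.Relative)
    };
    toast.Show();

    // Always signal completion so the next scheduled task can run.
    NotifyComplete();
}
```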
It’s important to highlight the NotifyComplete() method, which should be called as soon as the agent has completed all the operations. It notifies the operating system that the task has completed its job and that the next scheduled task can be executed. The NotifyComplete() method determines the task’s status. If it’s not called within the assigned time—25 seconds for periodic tasks or 10 minutes for resource intensive tasks—the execution is interrupted.
There’s another way to complete the agent’s execution: Abort(). This method is called when something goes wrong (for example, the required conditions to execute the agent are not satisfied) and the user needs to open the application to fix the problem.
The event handler is called UnhandledException and is triggered when an unexpected exception is raised. You can use it, for example, to log the error.
The previous sample shows you how to send local toast notifications. A toast notification is identified by the ShellToast class. You simply have to set all the supported properties (Title, Content, and optionally NavigationUri, which is the deep link). In the end, you have to call the Show() method to display it.
Like remote notifications, local toasts are supported only if the application is in the background. The previous code works only inside a background agent. If it’s executed by a foreground application, nothing happens.
Registering the Agent
The background agent is defined in a separate project, but is registered in the application. The registration should be done when the application starts, or in the settings page if we give users the option to enable or disable it within the application.
The base class to use when working with background agents is ScheduledActionService, which represents the phone’s scheduler. It takes care of registering all the background agents and maintaining them during their life cycle.
The first step is to define which type of agent you want to use. As previously mentioned, the background agent architecture is always the same; the type (periodic or resource intensive) is defined by the application.
In the first case you’ll need to create a PeriodicTask object, and in the second case, a ResourceIntensiveTask object. Regardless of the type, it’s important to set the Description property, which is the text displayed to users in the Settings page. It’s used to explain the purpose of the agent so users can decide whether or not to keep it enabled.
PeriodicTask periodicTask = new PeriodicTask("PeriodicTask");
periodicTask.Description = "This is a periodic task";
ResourceIntensiveTask resourceIntensiveTask = new ResourceIntensiveTask("ResourceIntensiveTask");
resourceIntensiveTask.Description = "This is a resource intensive task";
In both cases, background agents are identified by a name, which is passed as a parameter of the class’s constructor. This name should be unique across all the tasks registered using the ScheduledActionService class; otherwise you’ll get an exception.
The basic operation to add a task is very simple:
public void ScheduleAgent()
{
ScheduledAction action = ScheduledActionService.Find("Agent");
if (action == null || !action.IsScheduled)
{
if (action != null)
{
ScheduledActionService.Remove("Agent");
}
PeriodicTask task = new PeriodicTask("Agent");
task.Description = "This is a periodic agent";
ScheduledActionService.Add(task);
#if DEBUG
ScheduledActionService.LaunchForTest("Agent", TimeSpan.FromSeconds(10));
#endif
}
}
The first operation checks whether the agent is already scheduled by using the Find() method of the ScheduledActionService class, which requires the task’s unique name. This operation is required if we want to extend the agent’s lifetime. If the agent does not exist yet or is not scheduled (the IsScheduled property is false), we first remove it from the scheduler and then add it since the ScheduledActionService class doesn’t offer a method to simply update a registered task. The add operation is done using the Add() method, which accepts either a PeriodicTask or a ResourceIntensiveTask object.
Now the task is scheduled and will be executed when the appropriate conditions are satisfied. If you’re in the testing phase, you’ll find the LaunchForTest() method useful; it forces the execution of an agent after a fixed amount of time. In the previous sample, the agent identified by the name Agent is launched after 10 seconds. The LaunchForTest() method can also be called in the OnInvoke() method inside the background agent, allowing you to easily simulate multiple executions.
In the previous sample you can see that we’ve used conditional compilation to execute the LaunchForTest() method only when the application is launched in debug mode. This way, we make sure that when the application is compiled in release mode for publication to the Windows Store, the method won’t be executed; otherwise, you’ll get an exception if the method is called by an application installed from the Store.
Managing Errors
Background agents are good examples of the philosophy behind Windows Phone:
Users are always in control; they can disable whatever background agents they aren’t interested in through the Settings page.
Performance and battery life are two crucial factors; Windows Phone limits the maximum number of registered background agents.
For these reasons, the agent registration process can fail, so we need to manage both scenarios. The following code shows a more complete sample of a background agent’s initialization:
public void ScheduleAgent()
{
ScheduledAction action = ScheduledActionService.Find("Agent");
if (action == null || !action.IsScheduled)
{
if (action != null)
{
ScheduledActionService.Remove("Agent");
}
try
{
PeriodicTask task = new PeriodicTask("Agent");
task.Description = "This is a periodic agent";
ScheduledActionService.Add(task);
}
catch (InvalidOperationException exception)
{
if (exception.Message.Contains("BNS Error: The action is disabled"))
{
// The user has disabled the agent in the Settings page: warn them to re-enable it before registering again.
}
if (exception.Message.Contains("BNS Error: The maximum number of ScheduledActions of this type have already been added."))
{
// No user action required.
}
}
}
}
The difference in the previous sample is that the Add() operation is executed inside a try / catch block. This way, we are ready to catch the InvalidOperationException error that might be raised.
We can identify the scenario by the exception message:
BNS Error: The action is disabled. The user has disabled the agent connected to our application in the Settings page. In this case, we have to warn the user to enable it again in the Settings page before trying to register it.
BNS Error: The maximum number of ScheduledActions of this type have already been added. The user has reached the maximum number of agents allowed to be installed on the phone. In this case, we don’t have to do anything; Windows Phone will display a proper warning message.
Moreover, the ScheduledTask class (which is the base class that PeriodicTask and ResourceIntensiveTask inherit from) offers some properties for understanding the status of the last execution, such as LastScheduledTime which contains the date and time of the last execution, and LastExitReason which stores the last execution status.
Specifically, LastExitReason is very useful for knowing if the last execution completed successfully (Completed), if it exceeded the memory limit (MemoryQuotaExceeded) or the time limit (ExecutionTimeExceeded), or if an unhandled exception occurred (UnhandledException).
Background Audio Agent
There’s a special kind of background agent that works differently than periodic agents: audio agents, which are used in audio-related applications to keep playing audio when the app is closed. The goal is to offer a similar experience to the native Music + Videos Hub; even when the app is not in the foreground, users are able to keep listening to their music library.
Again, the background agent is defined in a different project than the foreground application. However:
The agent doesn’t need to be initialized in the foreground application using the ScheduledActionService class like we did for periodic agents.
There aren’t time limitations. The agent runs every time users interact with the music controls, and it never expires. The only limitation is that the triggered operation should complete within 30 seconds.
There is a memory limitation, but the cap is higher: 20 MB (keep in mind that the memory limit isn’t activated when the Visual Studio debugger is connected).
In this scenario, the background agent is not just a companion, but the core of the application; it manages all interactions with the music playback, regardless of whether they occur in the foreground application or the native embedded player.
Interacting With the Audio
The core class to reproduce background audio is called BackgroundAudioPlayer, which identifies the built-in Windows Phone audio player. There’s just one instance of the player within the system, and it can’t be shared. If users launch another application that uses a background audio agent (including the native Music + Videos Hub), it takes control over the audio reproduction. As we’re going to see soon, the BackgroundAudioPlayer class is used both in the foreground app and in the background agent to interact with the music playback.
The audio tracks played by a background audio agent are represented by the AudioTrack class. Each track, other than the resource to play, contains all the metadata like the title, artist, and album title.
The track’s path is set in the Source property, which can be either a remote file or a file stored in the local storage. However, most of the properties can be set directly when the AudioTrack object is created, like in the following sample:
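A sketch of creating a track with the metadata overload of the constructor; the URL and metadata are placeholder values:

```csharp
AudioTrack track = new AudioTrack(
    new Uri("http://example.com/track.mp3", UriKind.Absolute), // Source: remote or local file
    "Track title",
    "Artist name",
    "Album title",
    null); // optional album art URI
```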
With the previous code, in addition to setting the source file, we also immediately set information like the title, the artist, and the album. A useful available property is called PlayerControls, which can be used to set which controls (Play, Pause, Forward, etc.) are available for the track. This way, if you’re developing an application connected to an online radio, for example, you can automatically block options that are not supported (like the skip track button).
Creating the Agent
Visual Studio offers two templates to create background audio agents: Windows Phone Audio Playback Agent and Windows Phone Audio Streaming agent. They share the same purpose; their difference is that the Windows Phone Audio Streaming agent is required for working with media streaming codecs that are not natively supported by the platform.
A background audio agent’s project already comes with a class called AudioAgent, which inherits from the AudioPlayerAgent class. As we saw with periodic agents, the class automatically implements some methods that are used to interact with the agent. The most important ones are OnUserAction() and OnPlayStateChanged().
OnUserAction() is triggered every time users manually interact with the music playback, such as pausing a track or pressing the skip track button in the foreground application or the background player.
The method returns some parameters that can be used to understand the context and perform the appropriate operations:
a BackgroundAudioPlayer object, which is a reference to the background audio player
an AudioTrack object, which is a reference to the track currently playing
a UserAction object, which is the action triggered by the user
The following sample shows a typical implementation of the OnUserAction() method:
protected override void OnUserAction(BackgroundAudioPlayer player, AudioTrack track, UserAction action, object param)
{
switch (action)
{
case UserAction.Pause:
{
player.Pause();
break;
}
case UserAction.Play:
{
player.Play();
break;
}
case UserAction.SkipNext:
{
//Play next track.
break;
}
case UserAction.SkipPrevious:
{
//Play previous track.
break;
}
}
NotifyComplete();
}
Usually with a switch statement, you’ll monitor every supported user interaction, which is stored in the UserAction object. Then, you respond using the methods exposed by the BackgroundAudioPlayer class. Play and Pause are the simplest states to manage; SkipNext and SkipPrevious usually require more logic, since you have to get the previous or next track to play in the list from your library.
Note that background audio agents also require a call to the NotifyComplete() method as soon as the operation has been handled; it should be called within 30 seconds to avoid termination.
The OnPlayStateChanged() method is triggered automatically every time the music playback state is changed, but not as a direct consequence of a manual action. For example, when the current track ends, the agent should automatically start playing the next track in the list.
The method’s structure is very similar to the OnUserAction() method. In addition to a reference to the background player and the current track in this case, you’ll get a PlayState object, which notifies you about what’s going on.
The following sample shows a typical implementation of the method:
protected override void OnPlayStateChanged(BackgroundAudioPlayer player, AudioTrack track, PlayState playState)
{
    if (playState == PlayState.TrackEnded)
    {
        //Play next track.
    }
    NotifyComplete();
}
Tip: Background audio agents are not kept in memory all the time, but instead are launched only when the music playback state changes. If you need to persist some data across the different executions, you’ll need to rely on the local storage.
The Foreground Application
We’ve seen how all the main playback logic is managed directly by the background agent. The foreground application, in most cases, is just a visual front end for the agent.
To understand the playback state (and to properly update the UI) we need to use, again, the BackgroundAudioPlayer class we’ve seen. The difference is that, in the foreground application, we need to use the Instance singleton to get access to it.
The methods exposed by the class are the same, so we can use it to play, pause, or change the music playback state (for example, if we want to connect these operations to input controls like buttons).
The BackgroundAudioPlayer exposes an important event called PlayStateChanged, which is triggered every time the playback state changes. We can use it to update the visual interface (for example, if we want to display the track currently playing).
The following sample shows how the PlayStateChanged event is used to change the behavior of the play/pause button and to display some metadata about the currently playing track:
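A minimal sketch of such a handler, assuming hypothetical PlayButton, TitleText, and ArtistText controls on the page:

```csharp
//Subscribe to the event, typically in the page's constructor or in OnNavigatedTo.
BackgroundAudioPlayer.Instance.PlayStateChanged += Instance_PlayStateChanged;

private void Instance_PlayStateChanged(object sender, EventArgs e)
{
switch (BackgroundAudioPlayer.Instance.PlayerState)
{
case PlayState.Playing:
//A track is playing, so offer the pause action.
PlayButton.Content = "pause";
break;
case PlayState.Paused:
case PlayState.Stopped:
PlayButton.Content = "play";
break;
}

AudioTrack track = BackgroundAudioPlayer.Instance.Track;
if (track != null)
{
//Display some metadata about the currently playing track.
TitleText.Text = track.Title;
ArtistText.Text = track.Artist;
}
}
```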
The previous code should be familiar; you have access to all the properties we’ve seen in the background agent, like PlayerState to identify the current playback state, or Track to identify the currently playing track. Track isn’t just a read-only property. If we want to set a new track to play in the application, we can simply assign a new AudioTrack object to the Track property of the BackgroundAudioPlayer instance.
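For example, to start playing a new track from the foreground application (the file URI and metadata below are placeholders):

```csharp
//AudioTrack(source, title, artist, album, albumArt)
AudioTrack track = new AudioTrack(new Uri("MyTrack.mp3", UriKind.Relative),
"Track title", "Artist", "Album", null);
//Assigning the Track property and calling Play() starts the new track.
BackgroundAudioPlayer.Instance.Track = track;
BackgroundAudioPlayer.Instance.Play();
```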
Alarms and Reminders
Alarms and reminders are simple ways to show reminders to users at a specified date and time, as the native Alarm and Calendar applications do.
They work in the same way. The APIs belong to the Microsoft.Phone.Scheduler namespace, and they inherit from the base ScheduledNotification class. There are some properties in common between the two APIs:
Content: The reminder description.
BeginTime: The date and time the reminder should be displayed.
RecurrenceType: Sets whether it’s a recurrent or one-time reminder.
ExpirationTime: The date and time a recurrent reminder expires.
Every reminder is identified by a name, which should be unique across all the alarms and reminders created by the application. They work like background agents; their life cycle is controlled by the ScheduledActionService class, which takes care of adding, updating, and removing them.
Alarms are identified by the Alarm class and are used to show a reminder that doesn’t have a specific context. Users will be able to snooze or dismiss it. A feature specific to alarms is that they can play a custom sound, which is set in the Sound property.
The following sample shows how to create and schedule an alarm:
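A sketch of the scheduling code (the alarm name and the sound file path are illustrative):

```csharp
private void OnScheduleAlarmClicked(object sender, RoutedEventArgs e)
{
//The name must be unique across all alarms and reminders of the app.
Alarm alarm = new Alarm("MyAlarm")
{
Content = "Time to stand up!",
//Scheduled 15 seconds after the current date and time.
BeginTime = DateTime.Now.AddSeconds(15),
//Custom MP3 sound included in the Visual Studio project.
Sound = new Uri("/Assets/AlarmSound.mp3", UriKind.Relative)
};
ScheduledActionService.Add(alarm);
}
```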
The sample creates an alarm that is scheduled 15 seconds after the current date and time, and uses a custom sound that is an MP3 file inside the Visual Studio project.
Reminders, on the other hand, are identified by the Reminder class and are used when the notification is connected to a specific context, similar to the way calendar reminders are connected to an appointment.
The context is managed using the NavigationUri property, which supports a deep link. It’s the page (with optional query string parameters) that is opened when users tap the reminder’s title.
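A sketch of the scheduling code (the reminder name and the id query string parameter are illustrative):

```csharp
private void OnScheduleReminderClicked(object sender, RoutedEventArgs e)
{
Reminder reminder = new Reminder("MyReminder")
{
//Title is supported by reminders, but not by alarms.
Title = "My reminder",
Content = "Don't forget this appointment!",
BeginTime = DateTime.Now.AddSeconds(15),
//Deep link opened when the user taps the reminder's title.
NavigationUri = new Uri("/DetailPage.xaml?id=1", UriKind.Relative)
};
ScheduledActionService.Add(reminder);
}
```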
The previous code schedules a reminder that opens a page called DetailPage.xaml. Using the navigation events described earlier in this series, you’ll be able to get the query string parameters and load the requested data. Notice also that the Reminder class offers a Title property, which is not supported by alarms.
Live Tiles
Live Tiles are, without a doubt, the most unique Windows Phone feature, and one you won’t find on any other platform. They are called Live Tiles because they aren’t simply shortcuts to open applications; they can be updated with local or remote notifications to display information without forcing users to open the application. Many kinds of applications take advantage of this feature, like weather apps that display the forecast, news apps that display the latest headlines, and movie apps that display upcoming movie titles.
Windows Phone 8 has introduced many new features regarding Tiles, like new templates and new sizes.
An application can use three different sizes for Tiles: small, medium, and wide. As developers, we’ll be able to customize the Tile’s content according to the size so that, for example, the wide Tile can display more info than the small Tile.
Windows Phone 8 has also introduced three different templates to customize a Tile: flip, cycle, and iconic. It’s important to note that you can choose only one template for your application; it must be declared in the manifest file, in the Application UI section. Once you’ve set it, you won’t be able to change it at run time, and all the Tiles you’re going to create or update will have to use that template. In addition, you can choose the features (Tiles, pictures, etc.) to use for the main Tile in the Application UI section; this information will be used until a notification updates it.
In the following sections we’ll examine every available template in detail. For each one we’ll discuss the architecture and code needed to update it with a notification. For remote notifications, we’ll see the required XML. For local notifications, we’ll look at the APIs to use in the application or in a background agent.
In both cases, all the fields that define a Tile are optional. If you don’t set some of them, those properties will simply be ignored. On the other hand, if a field that was previously set is not updated with a notification, the old value will be kept.
Flip Template
Flip is the standard Windows Phone template, and the only one that was already available in Windows Phone 7. With this template you can display text, counters, and images on the front of the Tile. Periodically, the Tile will rotate or “flip” to show the opposite side, which can display different text or images.
As you can see from the previous figure, you can customize both front and rear sides of the Tile. If you want to include an image, you have to use one of the following sizes:
Small: 159 × 159
Medium: 336 × 336
Wide: 691 × 336
A flip template Tile is identified by the FlipTileData class. The following sample shows how to use it to define a Tile that can be managed by code.
private void OnCreateFlipTileClicked(object sender, RoutedEventArgs e)
{
FlipTileData data = new FlipTileData
{
SmallBackgroundImage = new Uri("Assets/Tiles/FlipCycleTileSmall.png", UriKind.Relative),
BackgroundImage = new Uri("Assets/Tiles/FlipCycleTileMedium.png", UriKind.Relative),
WideBackgroundImage = new Uri("Assets/Tiles/FlipCycleTileLarge.png", UriKind.Relative),
Title = "Flip tile",
BackTitle = "Back flip tile",
BackContent = "This is a flip tile",
WideBackContent = "This is a flip tile with wide content",
Count = 5
};
}
The following code shows how the same Tile is represented using the XML definition needed for remote notifications:
<?xml version="1.0" encoding="utf-8"?>
<wp:Notification xmlns:wp="WPNotification" Version="2.0">
  <wp:Tile Id="[Tile ID]" Template="FlipTile">
    <wp:SmallBackgroundImage Action="Clear">[small Tile size URI]</wp:SmallBackgroundImage>
    <wp:WideBackgroundImage Action="Clear">[front of wide Tile size URI]</wp:WideBackgroundImage>
    <wp:WideBackBackgroundImage Action="Clear">[back of wide Tile size URI]</wp:WideBackBackgroundImage>
    <wp:WideBackContent Action="Clear">[back of wide Tile size content]</wp:WideBackContent>
    <wp:BackgroundImage Action="Clear">[front of medium Tile size URI]</wp:BackgroundImage>
    <wp:Count Action="Clear">[count]</wp:Count>
    <wp:Title Action="Clear">[title]</wp:Title>
    <wp:BackBackgroundImage Action="Clear">[back of medium Tile size URI]</wp:BackBackgroundImage>
    <wp:BackTitle Action="Clear">[back of Tile title]</wp:BackTitle>
    <wp:BackContent Action="Clear">[back of medium Tile size content]</wp:BackContent>
  </wp:Tile>
</wp:Notification>
Notice the Action attribute that is set for many nodes. If you set it without assigning a value to the node, it will simply erase the previous value so that it reverts to the default.
Cycle Template
The cycle template can be used to create a visual experience similar to the one offered by the Photos Hub. Up to nine pictures can cycle on the front side of the Tile.
The cycle template offers fewer ways to customize the Tile than the other two templates since its focus is the images. The image sizes are the same as those used for the flip template:
Small: 159 × 159
Medium: 336 × 336
Wide: 691 × 336
A cycle template is identified by the CycleTileData class, as shown in the following sample:
private void OnCreateCycleTileClicked(object sender, RoutedEventArgs e)
{
CycleTileData data = new CycleTileData()
{
Count = 5,
SmallBackgroundImage = new Uri("Assets/Tiles/FlipCycleTileSmall.png", UriKind.Relative),
Title = "Cycle tile",
CycleImages = new List<Uri>
{
new Uri("Assets/Tiles/Tile1.png", UriKind.Relative),
new Uri("Assets/Tiles/Tile2.png", UriKind.Relative),
new Uri("Assets/Tiles/Tile3.png", UriKind.Relative)
}
};
}
The following XML can be used to send remote notifications to update Tiles based on the cycle template:
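A sketch of the payload, following the same structure as the flip template (up to nine CycleImage elements are supported; the element names below follow the documented CycleTile schema):

```xml
<?xml version="1.0" encoding="utf-8"?>
<wp:Notification xmlns:wp="WPNotification" Version="2.0">
  <wp:Tile Id="[Tile ID]" Template="CycleTile">
    <wp:SmallBackgroundImage Action="Clear">[small Tile size URI]</wp:SmallBackgroundImage>
    <wp:CycleImage1 Action="Clear">[photo 1 URI]</wp:CycleImage1>
    <wp:CycleImage2 Action="Clear">[photo 2 URI]</wp:CycleImage2>
    <wp:CycleImage3 Action="Clear">[photo 3 URI]</wp:CycleImage3>
    <wp:CycleImage4 Action="Clear">[photo 4 URI]</wp:CycleImage4>
    <wp:CycleImage5 Action="Clear">[photo 5 URI]</wp:CycleImage5>
    <wp:CycleImage6 Action="Clear">[photo 6 URI]</wp:CycleImage6>
    <wp:CycleImage7 Action="Clear">[photo 7 URI]</wp:CycleImage7>
    <wp:CycleImage8 Action="Clear">[photo 8 URI]</wp:CycleImage8>
    <wp:CycleImage9 Action="Clear">[photo 9 URI]</wp:CycleImage9>
    <wp:Count Action="Clear">[count]</wp:Count>
    <wp:Title Action="Clear">[title]</wp:Title>
  </wp:Tile>
</wp:Notification>
```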
Iconic Template
The iconic template is used to create Tiles that emphasize the counter. Many native applications such as Mail, Messaging, and Phone use this template. In this template, the counter is bigger and easier to see.
The iconic template features two main differences from the flip and cycle templates. The first is that full-size images are not supported; instead, you can specify an icon image, which is displayed near the counter. There are just two image sizes required:
Small and Wide Tiles: 110 × 110
Medium Tile: 202 × 202
The other difference is that it’s possible to customize the background color (the only way to do this with the other templates is to use an image with the background color you prefer). If you don’t set a background color, the template will automatically use the phone’s theme.
An iconic Tile is represented by the IconicTileData template, as shown in the following sample:
private void OnCreateIconicTileClicked(object sender, RoutedEventArgs e)
{
IconicTileData data = new IconicTileData()
{
SmallIconImage = new Uri("/Assets/Tiles/IconicTileSmall.png", UriKind.Relative),
IconImage = new Uri("/Assets/Tiles/IconicTileMediumLarge.png", UriKind.Relative),
Title = "My App",
Count = 5,
WideContent1 = "First line",
WideContent2 = "Second line",
WideContent3 = "Third line"
};
}
The following sample is the XML representation for remote push notifications in a Tile that uses the iconic template:
<?xml version="1.0" encoding="utf-8"?>
<wp:Notification xmlns:wp="WPNotification" Version="2.0">
  <wp:Tile Id="[Tile ID]" Template="IconicTile">
    <wp:SmallIconImage Action="Clear">[small Tile size URI]</wp:SmallIconImage>
    <wp:IconImage Action="Clear">[medium/wide Tile size URI]</wp:IconImage>
    <wp:WideContent1 Action="Clear">[1st row of content]</wp:WideContent1>
    <wp:WideContent2 Action="Clear">[2nd row of content]</wp:WideContent2>
    <wp:WideContent3 Action="Clear">[3rd row of content]</wp:WideContent3>
    <wp:Count Action="Clear">[count]</wp:Count>
    <wp:Title Action="Clear">[title]</wp:Title>
    <wp:BackgroundColor Action="Clear">[hex ARGB format color]</wp:BackgroundColor>
  </wp:Tile>
</wp:Notification>
Working With Multiple Tiles
The previous code, in addition to being supported in the application or in a background agent to update the main Tile, can also be used to create multiple Tiles—a feature introduced in Windows Phone 7.5. Secondary Tiles behave like the main ones: they can be updated by notifications and moved or deleted from the Start screen.
The difference is that secondary Tiles have a unique ID, which is the Tile’s deep link. The main Tile always opens the application’s main page, while secondary Tiles can open another page of the application and include one or more query string parameters to identify the context. For example, a weather application can create Tiles for the user’s favorite cities, and every Tile will redirect the user to the forecast page for the selected city.
The base class to interact with Tiles is called ShellTile, which belongs to the Microsoft.Phone.Shell namespace.
Creating a secondary Tile is simple: you call the Create() method by passing the Tile’s deep link and the Tile itself, using one of the classes we’ve seen before. The following sample shows how to create a secondary Tile using the flip template:
private void OnCreateFlipTileClicked(object sender, RoutedEventArgs e)
{
FlipTileData data = new FlipTileData
{
SmallBackgroundImage = new Uri("Assets/Tiles/FlipCycleTileSmall.png", UriKind.Relative),
BackgroundImage = new Uri("Assets/Tiles/FlipCycleTileMedium.png", UriKind.Relative),
WideBackgroundImage = new Uri("Assets/Tiles/FlipCycleTileLarge.png", UriKind.Relative),
Title = "Flip tile",
BackTitle = "Back flip tile",
BackContent = "This is a flip tile",
WideBackContent = "This is a flip tile with wide content",
Count = 5
};
ShellTile.Create(new Uri("/MainPage.xaml?id=1", UriKind.Relative), data, true);
}
When the application is opened using this Tile, you’ll be able to understand the context and display the proper information using the OnNavigatedTo method and the NavigationContext class we used earlier in this series.
Note: To avoid inappropriate usage of secondary Tiles, every time you create a new Tile the application will be closed to immediately display it to the user.
Deleting a secondary Tile requires working with the ShellTile class again. It exposes a collection called ActiveTiles, which contains all the Tiles that belong to the application, including the main one. It’s sufficient to get a reference to the Tile we want to delete (using the deep link as an identifier) and call the Delete() method on it.
private void OnDeleteTileClicked(object sender, RoutedEventArgs e)
{
Uri deepLink = new Uri("/MainPage.xaml?id=1", UriKind.Relative);
ShellTile tile = ShellTile.ActiveTiles.FirstOrDefault(x => x.NavigationUri == deepLink);
if (tile != null)
{
tile.Delete();
}
}
The previous sample deletes the Tile identified by the deep link /MainPage.xaml?id=1. Unlike when the Tile is created, the application won’t be closed.
Tip: Remember to always check that a Tile exists before removing it. Like every other Tile, users can also delete one from the Start screen by tapping and holding the Tile and then tapping the Unpin icon.
Tiles can also be updated. Updates can be performed not only by the main application but also in the background by a background agent.
The approach is similar to the one we’ve seen for the delete operation. First we have to retrieve a reference to the Tile we want to update, and then we call the Update() method, passing the Tile object as a parameter. The following sample shows how to update a Tile that uses the flip template:
private void OnUpdateMainTileClicked(object sender, RoutedEventArgs e)
{
FlipTileData data = new FlipTileData
{
Title = "Updated Flip tile",
BackTitle = "Updated Back flip tile",
BackContent = "This is an updated flip tile",
WideBackContent = "This is an updated flip tile with wide content",
Count = 5
};
Uri deepLink = new Uri("/MainPage.xaml?id=1", UriKind.Relative);
ShellTile tile = ShellTile.ActiveTiles.FirstOrDefault(x => x.NavigationUri == deepLink);
if (tile != null)
{
tile.Update(data);
}
}
The Update() method can also be used to update the application’s main Tile. It’s always stored as the first element of the ActiveTiles collection, so it’s enough to call the Update() method on it as in the following sample:
private void OnUpdateMainTileClicked(object sender, RoutedEventArgs e)
{
FlipTileData data = new FlipTileData
{
Title = "Updated Flip tile",
BackTitle = "Updated Back flip tile",
BackContent = "This is an updated flip tile",
WideBackContent = "This is an updated flip tile with wide content",
Count = 5
};
ShellTile.ActiveTiles.FirstOrDefault().Update(data);
}
The previous code will always work, even if the main Tile is not pinned to the Start screen. If the user decides to pin it, the Tile will already be updated with the latest notification.
Tip: You can invite users to pin the main Tile on the Start screen, but you can’t force it by code.
Interacting With the Lock Screen
Windows Phone 8 has introduced a new way for applications to interact with users, thanks to the lock screen support. There are two ways to interact with it:
Display notifications in the same way the Messaging and Mail apps display the number of unread messages.
Change the lock screen image; specifically, the application can become a lock screen provider and occasionally update the image using a background agent.
Let’s see in detail how to support both scenarios.
Notifications
In the Settings page, users can choose up to five applications that are able to display counter notifications, and only one application that is able to display text notifications.
To support both scenarios in our application, we need to manually add a new declaration in the manifest file (remember to use the View code option in the context menu since it’s not supported by the visual editor):
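Assuming the standard WP8 manifest schema, the declarations look like the following (the ConsumerID GUID is the fixed platform value used for lock screen extensions):

```xml
<Extensions>
  <!-- Enables counter notifications on the lock screen -->
  <Extension ExtensionName="LockScreen_Notification_IconCount"
             ConsumerID="{111DFF24-AA15-4A96-8006-2BFF8122084F}"
             TaskID="_default" />
  <!-- Enables text notifications on the lock screen -->
  <Extension ExtensionName="LockScreen_Notification_TextField"
             ConsumerID="{111DFF24-AA15-4A96-8006-2BFF8122084F}"
             TaskID="_default" />
</Extensions>
```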
The first extension is used to support counter notifications, while the second one is used for text notifications. You can declare just one of them or both, according to your requirements.
If you want to support counter notifications, there’s another modification to apply to the manifest file: inside the Tokens section, you’ll find the tags that define the main Tile’s basic properties. One of them is called DeviceLockImageURI, which you need to set with the path of the image that will be used as an icon for the notifications.
It can contain only transparent or white pixels. No other colors are supported.
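The token declaration looks like the following (the icon path is illustrative):

```xml
<!-- Inside the Tokens section of WMAppManifest.xml -->
<DeviceLockImageURI IsRelative="true" IsResource="false">Assets\Tiles\LockIcon.png</DeviceLockImageURI>
```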
Once your application is set, you don’t have to do anything special to display lock screen notifications. In fact, they are based on Tile notifications, so you’ll be able to update both the Tile and the lock screen with just one notification.
If your application supports counter notifications, you need to send a Tile notification with the number stored in the Count property.
If your application supports text notifications, you need to send a Tile notification with the text stored in the WideBackContent property for the flip template or the WideContent1 property for an iconic template. The cycle template doesn’t support text notifications.
Lock Screen Image
The starting point for supporting lock screen images is, again, the manifest file. The following sample is the declaration that should be added in the Extensions section:
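Assuming the same manifest schema as the notification extensions, the provider declaration is:

```xml
<Extension ExtensionName="LockScreen_Background"
           ConsumerID="{111DFF24-AA15-4A96-8006-2BFF8122084F}"
           TaskID="_default" />
```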
Once this is done, your application will be listed as a wallpaper provider in the Settings page. If the user chooses your application as a provider, you’ll be able to update the lock screen image, both from the foreground app and using a background agent.
The APIs allow you to check whether the application has already been set as a provider, or you can ask the user. If the application is set as a provider, you will be able to effectively change the wallpaper; otherwise you’ll get an exception.
Two classes are part of the Windows.Phone.System.UserProfile namespace: LockScreenManager can be used to detect the current provider status, and LockScreen can effectively perform operations on the lock screen.
private async void OnSetWallpaperClicked(object sender, RoutedEventArgs e)
{
Uri wallpaper = new Uri("ms-appx:///Assets/Wallpapers/Wallpaper1.jpg", UriKind.RelativeOrAbsolute);
bool isProvider = LockScreenManager.IsProvidedByCurrentApplication;
if (isProvider)
{
LockScreen.SetImageUri(wallpaper);
}
else
{
LockScreenRequestResult lockScreenRequestResult = await LockScreenManager.RequestAccessAsync();
if (lockScreenRequestResult == LockScreenRequestResult.Granted)
{
LockScreen.SetImageUri(wallpaper);
}
}
}
The first step is to check if the current application is set as a provider by using the IsProvidedByCurrentApplication property of the LockScreenManager class. Otherwise, we ask for the user’s permission by calling the RequestAccessAsync() method. In return, we receive the user’s choice, which can be positive (LockScreenRequestResult.Granted) or negative (LockScreenRequestResult.Denied).
In both cases, only if the application has been set as provider can we effectively change the lock screen image using the SetImageUri() method, which requires the picture’s path as a parameter. The picture can be either part of the project (as in the previous sample where we use the ms-appx:/// prefix) or stored in the local storage (in this case, we have to use the ms-appdata:///Local/ prefix). Remote images are not directly supported; they must be downloaded before using them as a lock screen background.
The previous code can also be used in a background agent. The difference is that you’ll only be able to check whether the application is set as a provider and, if so, change the image. You won’t be able to ask the user for permission to use your app as a provider, since background agents can’t interact with the UI.
Conclusion
In this tutorial, we have seen that live apps are one of the core concepts in Windows Phone development and, to properly create a quality experience, many factors are involved, like notifications, agents, and Tiles.
The following list details what we’ve learned:
Push notifications are the best way to notify users of something even if the application is in the background. An app can either send toast notifications or update Tiles. We’ve seen how to create the required architecture for reach, both on the client side and the server side.
Push notifications offer the best approach for optimizing battery life and performance, but they support limited scenarios and require a server application to send them. For this reason, Windows Phone has introduced background agents, which we can periodically execute to send notifications or carry out general purpose operations, even when the application is not in the foreground.
Windows Phone offers a special background agent type called audio background agent that is used in audio playback scenarios. Applications are able to play audio even when they are not running, like the native Music + Videos Hub.
Alarms and reminders are a simple way to show reminders to users when the application is not running.
Live Tiles are one of the distinctive features of the platform. We’ve learned how to customize them by choosing between different templates and sizes.
We’ve seen another new feature introduced in Windows Phone 8, lock screen support: applications are now able to interact with the lock screen by displaying notifications and changing the wallpaper.
This tutorial represents a chapter from Windows Phone 8 Succinctly, a free eBook from the team at Syncfusion.