Channel: Envato Tuts+ Code - Mobile Development

An Introduction to Volley


Volley is a networking library developed by Google and introduced during Google I/O 2013. It was developed because of the absence, in the Android SDK, of a networking class capable of working without interfering with the user experience.

Until the release of Volley, the canonical Java class java.net.HttpURLConnection and the Apache org.apache.http.client were the only tools available to Android programmers to develop a RESTful system between a client and a remote backend.

Putting aside for a moment the fact that these two classes aren't exempt from bugs, it should be noted that everything beyond a simple HTTP transaction had to be written ex novo. If you wanted to cache images or prioritize requests, you had to develop it from scratch.

Fortunately, now there's Volley, created and tailored to fulfill these needs.

1. Why Volley?

Avoid HttpUrlConnection and HttpClient

On lower API levels (mostly on Gingerbread and Froyo), HttpUrlConnection and HttpClient are far from perfect. There are some known issues and bugs that were never fixed. Moreover, HttpClient was deprecated in the last API update (API 22), which means that it will no longer be maintained and may be removed in a future release.

These are sufficient reasons for deciding to switch to a more reliable way of handling your network requests.

And Avoid AsyncTask Too

Since the introduction of Honeycomb (API 11), it's been mandatory to perform network operations on a separate thread, different from the main thread. This substantial change led the way to massive use of the AsyncTask<Params, Progress, Result> specification.

With AsyncTask, you first define some preparatory actions, such as the definition of the context, in onPreExecute. You then perform your asynchronous tasks using the doInBackground method. Finally, you handle results in onPostExecute. It's pretty straightforward, way easier than the implementation of a service, and comes with a ton of examples and documentation.

The main problem, however, is the serialization of the calls. Using the AsyncTask class, you can't decide which request goes first and which one has to wait. Everything happens FIFO, first in, first out.

The problems arise, for example, when you have to load a list of items that have a thumbnail attached. When the user scrolls down and expects new results, you can't tell your activity to first load the JSON of the next page and only then the images of the previous one. This can become a serious user experience problem in applications such as Facebook or Twitter, where the list of new items is more important than the thumbnails associated with them.

Volley aims to solve this problem by including a powerful cancellation API. You no longer need to check in onPostExecute whether the activity was destroyed while performing the call. This helps you avoid an unwanted NullPointerException.

It's Much Faster

Some time ago, the Google+ team did a series of performance tests on each of the different methods you can use to make network requests on Android. Volley got a score up to ten times better than the other alternatives when used in RESTful applications.

It Caches Everything

Volley automatically caches requests and this is something truly life-saving. Let’s return for a moment to the example I gave earlier. You have a list of items—a JSON array let’s say—and each item has a description and a thumbnail associated with it. Now think about what happens if the user rotates the screen: the activity is destroyed, the list is downloaded again, and so are the images. Long story short, a significant waste of resources and a poor user experience.

Volley proves to be extremely useful for overcoming this issue. It remembers the previous calls it did and handles the activity destruction and reconstruction. It caches everything without you having to worry about it.

Small Metadata Operations

Volley is perfect for small calls, such as JSON objects, portions of lists, details of a selected item, and so on. It was devised for RESTful applications, and in this particular case it gives its very best.

It is not so good, however, when employed for streaming operations and large downloads. Contrary to common belief, Volley's name doesn't come from sports. It's rather intended as repeated bursts of calls, grouped together. It's somewhat intuitive why this library doesn't come in handy when, instead of a volley of arrows, you want to fire a cannonball.

2. Under the Hood

Volley works on three different levels with each level operating on its own thread.

Volley under the hood

Main Thread

On the main thread, consistently with what you already do in the AsyncTask specification, you are only allowed to fire the request and handle its response. Nothing more, nothing less.

The main consequence is that you can actually ignore everything that used to go on in the doInBackground method. Volley automatically manages the HTTP transactions and catches the network errors that you previously had to handle yourself.

Cache and Network Threads

When you add a request to the queue, several things happen under the hood. First, Volley checks whether the request can be serviced from cache. If it can, the cached response is read, parsed, and delivered. Otherwise, the request is passed to the network thread.

On the network thread, a round-robin with a series of threads is constantly working. The first available network thread dequeues the request, makes the HTTP request, parses the response, and writes it to cache. To finish, it dispatches the parsed response back to the main thread where your listeners are waiting to handle the result.

3. Getting Started

Step 1: Importing Volley

Volley isn't so handy to set up. There appears to be no official Maven repository available, which is quite bewildering. You have to rely on the official source code, which you can import in one of several ways.

First things first, download the Volley source from its repository. If you feel confident doing this, this Git command can do all the work for you:
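A minimal version, assuming you're cloning from the official AOSP repository:

```shell
# Clone the Volley source from the official AOSP repository
git clone https://android.googlesource.com/platform/frameworks/volley
```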

Until some weeks ago, you could wrap everything up using the ant command line (android update project -p . and then ant jar) and importing your JAR library in your Android Studio project with a simple compile files('libs/volley.jar').

Recently, though, Google updated Volley to the Android Studio build style, making it harder to create a standalone JAR. You can still do it, but only with older versions of the library. I personally discourage you from using this option, even though it may seem the quickest.

You should set up Volley the classic way, that is, by importing the source as a module. In Android Studio, with your project opened, select File > New Module, and choose Import Existing Project. Select the directory where you've just downloaded the source code and confirm. A folder named Volley will show up in your project structure. Android Studio automatically updates your settings.gradle file to include the Volley module so you just need to add to your dependencies compile project(':volley') and you’re done.

There is a third way. You can add to the dependency section of the build.gradle file this line:
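A commonly used mirror at the time was the one published under the mcxiaoke namespace; the exact version number below is an assumption and may differ:

```groovy
// Unofficial mirror of the Volley source, published to Maven Central
compile 'com.mcxiaoke.volley:library:1.0.19'
```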

It’s a mirror copy of the official Google repository, regularly synced and updated. It's probably the simplest and fastest way to get started. Be aware, however, that it's an unofficial Maven repository: there are no guarantees and it's not backed by Google.

In my opinion, it's still better to invest a few more minutes importing the official source code. This way, you can easily jump to the original definitions and implementations so that, in case of doubt, you can always rely on the official Volley source—and even change it if you need to.

Step 2: Using Volley

Volley mostly works with just two classes, RequestQueue and Request. You first create a RequestQueue, which manages worker threads and delivers the parsed results back to the main thread. You then pass it one or more Request objects.

The Request constructor always takes as parameters the method type (GET, POST, etc.), the URL of the resource, and event listeners. Then, depending on the type of request, it may ask for some more variables.

In the following example, I create a RequestQueue object by invoking one of Volley's convenience methods, Volley.newRequestQueue. This sets up a RequestQueue object, using default values defined by Volley.
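The snippet below is a sketch of such a request; the URL is a placeholder, and the usual com.android.volley imports are assumed:

```java
// Set up a request queue using Volley's default values
RequestQueue queue = Volley.newRequestQueue(this);

// The URL here is a placeholder
String url = "http://www.example.com/data.txt";
StringRequest stringRequest = new StringRequest(Request.Method.GET, url,
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                // Called on the main thread with the server's response
                Log.d("Volley", "Response: " + response);
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                Log.e("Volley", "Error: " + error.toString());
            }
        });

// Fire the request
queue.add(stringRequest);
```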

As you can see, it’s incredibly straightforward. You create the request and add it to the request queue. And you’re done.

Note that the listener syntax is similar to AsyncTask.onPostExecute; it simply becomes onResponse. This isn't a coincidence. The developers who worked on Volley purposefully made the library's API similar to the AsyncTask methods. This makes the transition from AsyncTask to Volley that much easier.

If you have to fire multiple requests in several activities, you should avoid using the above approach, Volley.newRequestQueue.add. It's much better to instantiate one shared request queue and use it across your project:
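One common approach is a singleton holder for the queue, sketched below (the class name VolleySingleton is my own choice):

```java
public class VolleySingleton {

    private static VolleySingleton instance;
    private final RequestQueue requestQueue;

    private VolleySingleton(Context context) {
        // The application context keeps the queue alive across activities
        requestQueue = Volley.newRequestQueue(context.getApplicationContext());
    }

    public static synchronized VolleySingleton getInstance(Context context) {
        if (instance == null) {
            instance = new VolleySingleton(context);
        }
        return instance;
    }

    public RequestQueue getRequestQueue() {
        return requestQueue;
    }
}
```

Any activity can then call VolleySingleton.getInstance(this).getRequestQueue().add(request).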

We'll see specifically how to develop something like this in the next tutorial of this series.

4. Put Your Hands in the Dough

Handling Standard Requests

Volley comes in handy for implementing three very common request types:

  • StringRequest
  • ImageRequest
  • JsonRequest

Each of these classes extends the Request class that we used earlier. We already looked at StringRequest in the previous example. Let’s see instead how a JsonRequest works.
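Here's what such a request might look like (the URL is a placeholder):

```java
String url = "http://www.example.com/data.json";
JsonObjectRequest jsonRequest = new JsonObjectRequest(Request.Method.GET, url,
        null, // no parameters are posted along with the request
        new Response.Listener<JSONObject>() {
            @Override
            public void onResponse(JSONObject response) {
                // The response arrives already parsed as a JSONObject
                Log.d("Volley", response.toString());
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                Log.e("Volley", error.toString());
            }
        });
requestQueue.add(jsonRequest);
```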

Beautiful. Isn’t it? As you can see, the type of result is already set to JSONObject. You can ask for a JSONArray too if you want, using a JsonArrayRequest instead of a JsonObjectRequest.

As before, the first parameter of the constructor is the HTTP method to use. You then provide the URL to fetch the JSON from. The third variable in the example above is null. This is fine as it indicates that no parameters will be posted along with the request. Finally, you have the listener to receive the JSON response and an error listener. You can pass in null if you want to ignore errors.

Fetching images requires a bit more work. There are three possible methods for requesting an image. ImageRequest is the standard one. It displays the picture you requested in a common ImageView, retrieving it via a provided URL. All the decoding and resizing operations you may want Volley to perform happen on a worker thread. The second option is the ImageLoader class, which you can think of as an orchestrator of a large number of ImageRequests, for example, to populate a ListView with images. The third option is NetworkImageView, which is a sort of XML substitute for the ImageView layout item.

Let’s look at an example.
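A sketch of the request, assuming an ImageView named imageView and a placeholder URL:

```java
ImageRequest imageRequest = new ImageRequest(url,
        new Response.Listener<Bitmap>() {
            @Override
            public void onResponse(Bitmap bitmap) {
                // Display the decoded bitmap
                imageView.setImageBitmap(bitmap);
            }
        },
        0, 0, // maxWidth and maxHeight; 0 means "don't resize"
        ImageView.ScaleType.CENTER_INSIDE,
        Bitmap.Config.ARGB_8888,
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                Log.e("Volley", error.toString());
            }
        });
requestQueue.add(imageRequest);
```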

The first parameter is the URL of the picture and the second one is the listener for the result. The third and fourth parameters are integers, maxWidth and maxHeight. You can set them to 0 to ignore them. After that, ImageRequest asks you for the ScaleType used to calculate the needed image size and for the format to decode the bitmap to. I suggest always using Bitmap.Config.ARGB_8888. Finally, we pass in an error listener.

Note that Volley automatically sets the priority of this request to LOW.

Making a POST Request

Switching from a GET request to a POST request is simple. You need to change the Request.Method in the constructor of the request and override the getParams method, returning a proper Map<String, String> containing the parameters of the request.
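A sketch of a POST request with overridden parameters; the field names are placeholders, and responseListener and errorListener are assumed to be defined as in the earlier examples:

```java
StringRequest postRequest = new StringRequest(Request.Method.POST, url,
        responseListener, errorListener) {
    @Override
    protected Map<String, String> getParams() throws AuthFailureError {
        // These parameters are sent as the body of the POST request
        Map<String, String> params = new HashMap<>();
        params.put("username", "example"); // hypothetical form fields
        params.put("password", "secret");
        return params;
    }
};
requestQueue.add(postRequest);
```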

Canceling a Request

If you want to cancel all your requests, add the following code snippet to the onStop method:
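Assuming a RequestFilter that matches everything, the snippet might look like this:

```java
@Override
protected void onStop() {
    super.onStop();
    requestQueue.cancelAll(new RequestQueue.RequestFilter() {
        @Override
        public boolean apply(Request<?> request) {
            return true; // cancel every request still in flight
        }
    });
}
```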

This way, you don't need to worry about the possibility that the user has already destroyed the activity when onResponse is called. A NullPointerException would be thrown in such a case.

POST and PUT requests, however, should continue, even after the user changes activities. We can accomplish this by using tags. When constructing a GET request, add a tag to it.
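For example, using the string "GET" as the tag:

```java
stringRequest.setTag("GET");
requestQueue.add(stringRequest);
```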

To cancel every pending GET request, we simply add the following line of code:
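Assuming the requests were tagged with the string "GET":

```java
requestQueue.cancelAll("GET");
```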

This way, you only cancel the GET requests, leaving other requests untouched. Note that you now have to manually handle the case in which the activity is destroyed prematurely.

Managing Cookies and Request Priority

Volley doesn't provide a method for setting the cookies of a request, nor its priority. It probably will in the future, since it's a serious omission. For the time being, however, you have to extend the Request class.

For managing cookies, you can play with the headers of the request, overriding the getHeaders method:
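One possible implementation, extending StringRequest; the class name and the Cookie header formatting are my own choices:

```java
public class CookieRequest extends StringRequest {

    private final Map<String, String> headers = new HashMap<>();

    public CookieRequest(int method, String url,
                         Response.Listener<String> listener,
                         Response.ErrorListener errorListener) {
        super(method, url, listener, errorListener);
    }

    // Volley calls this to collect the headers to send with the request
    @Override
    public Map<String, String> getHeaders() throws AuthFailureError {
        return headers;
    }

    public void setCookies(List<String> cookies) {
        // Join the cookies into a single "Cookie" header value
        StringBuilder builder = new StringBuilder();
        for (String cookie : cookies) {
            if (builder.length() > 0) {
                builder.append("; ");
            }
            builder.append(cookie);
        }
        headers.put("Cookie", builder.toString());
    }
}
```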

With this implementation, you can directly provide the list of cookies to the request using setCookies.

For the priority, you also need to extend the Request class, overriding the getPriority method. This is what the implementation could look like:
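A sketch, again extending StringRequest; the setPriority setter is something you add yourself, since the base class only exposes getPriority:

```java
public class PriorityRequest extends StringRequest {

    private Priority priority = Priority.NORMAL;

    public PriorityRequest(int method, String url,
                           Response.Listener<String> listener,
                           Response.ErrorListener errorListener) {
        super(method, url, listener, errorListener);
    }

    public void setPriority(Priority priority) {
        this.priority = priority;
    }

    // Volley's dispatcher consults this when ordering the queue
    @Override
    public Priority getPriority() {
        return priority;
    }
}
```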

Then, on the main thread, invoke this line of code to set the priority of the request:
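Assuming your Request subclass exposes a setPriority setter as described above:

```java
request.setPriority(Request.Priority.HIGH);
```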

You can choose from one of four possible priority states as shown below:
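These are the values of the Priority enum defined inside Volley's Request class:

```java
public enum Priority {
    LOW,
    NORMAL,
    HIGH,
    IMMEDIATE
}
```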

Conclusion

In this article, we looked at how the Volley networking library works. We first saw why and when it's better to use Volley instead of another solution already included in the Android SDK. We then dove deep into the library's details, looking at its workflow and its supported request types. Finally, we got our hands dirty by creating simple requests and implementing custom ones for handling cookies and prioritization.

In the next part of this series about Volley, we'll create a simple application that leverages Volley. I'll show you how to make a weather application for Mars, using weather data that's collected on Mars by the Curiosity rover.

Published 2015-05-13 by Gianluca Segato

An Introduction to SceneKit: User Interaction, Animations & Physics

Final product image
What You'll Be Creating

This is the second part of our introductory series on SceneKit. In this tutorial, I assume you are familiar with the concepts explained in the first part, including setting up a scene with lights, shadows, cameras, nodes, and materials.

In this tutorial, I am going to teach you about some of the more complicated—but also more useful—features of SceneKit, such as animation, user interaction, particle systems, and physics. By implementing these features, you can create interactive and dynamic 3D content rather than static objects like you did in the previous tutorial.

1. Setting Up the Scene

Create a new Xcode project based on the iOS > Application > Single View Application template.

iOS App Template

Name the project, set Language to Swift, and Devices to Universal.

App Information

Open ViewController.swift and import the SceneKit framework.
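Together with the UIKit import the template already provides, the top of the file reads:

```swift
import UIKit
import SceneKit
```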

Next, declare the following properties in the ViewController class.
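The exact property list isn't shown here, but based on the nodes used throughout this tutorial, a plausible declaration (all names are my guesses) is:

```swift
class ViewController: UIViewController {

    // The SceneKit view and the nodes we'll manipulate later
    var sceneView: SCNView!
    var cameraNode: SCNNode!
    var groundNode: SCNNode!
    var buttonNode: SCNNode!
    var sphere1Node: SCNNode!
    var sphere2Node: SCNNode!
}
```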

We set up the scene in the viewDidLoad method as shown below.
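A sketch of the setup in modern Swift; the geometry sizes and positions are assumptions, but it shows the two new pieces, SCNFloor and zFar:

```swift
override func viewDidLoad() {
    super.viewDidLoad()

    // Create the view that renders the scene
    sceneView = SCNView(frame: view.frame)
    view.addSubview(sceneView)

    let scene = SCNScene()
    sceneView.scene = scene

    // Camera: zFar controls how far into the distance it can see
    cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()
    cameraNode.camera?.zFar = 200
    cameraNode.position = SCNVector3(x: -20, y: 15, z: 20)
    scene.rootNode.addChildNode(cameraNode)

    // SCNFloor gives us a ready-made ground plane
    groundNode = SCNNode(geometry: SCNFloor())
    scene.rootNode.addChildNode(groundNode)

    // The two spheres (sizes and positions are placeholders)
    sphere1Node = SCNNode(geometry: SCNSphere(radius: 1))
    sphere1Node.position = SCNVector3(x: -5, y: 1.5, z: 0)
    scene.rootNode.addChildNode(sphere1Node)

    sphere2Node = SCNNode(geometry: SCNSphere(radius: 1))
    sphere2Node.position = SCNVector3(x: 5, y: 1.5, z: 0)
    scene.rootNode.addChildNode(sphere2Node)

    // A red box acting as the button
    buttonNode = SCNNode(geometry: SCNBox(width: 2, height: 0.5, length: 2, chamferRadius: 0.1))
    buttonNode.geometry?.firstMaterial?.diffuse.contents = UIColor.red
    buttonNode.position = SCNVector3(x: 0, y: 0.25, z: 5)
    scene.rootNode.addChildNode(buttonNode)
}
```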

The implementation of viewDidLoad should look familiar if you've read the first part of this series. All we do is set up the scene that we'll use in this tutorial. The only new things are the SCNFloor class and the zFar property.

As the name implies, the SCNFloor class is used to create a floor or ground for the scene. This is much easier compared to creating and rotating an SCNPlane as we did in the previous tutorial.

The zFar property determines how far into the distance a camera can see or how far light from a particular source can reach. Build and run your app. Your scene should look something like this:

Initial scene

2. User Interaction

User interaction is handled in SceneKit by a combination of the UIGestureRecognizer class and hit tests. To detect a tap, for example, you first add a UITapGestureRecognizer to an SCNView, determine the tap's position in the view, and see if it is in contact with or hits any of the nodes.

To better understand how this works, we'll use an example. Whenever a node is tapped, we remove it from the scene. Add the following code snippet to the viewDidLoad method of the ViewController class:
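Assuming the view is stored in a property named sceneView, the recognizer setup in modern Swift might read:

```swift
// Detect taps anywhere in the SceneKit view
let tapRecognizer = UITapGestureRecognizer(target: self, action: #selector(sceneTapped(_:)))
sceneView.addGestureRecognizer(tapRecognizer)
```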

Next, add the following method to the ViewController class:
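A sketch of the handler in modern Swift, with sceneView assumed to be a property:

```swift
@objc func sceneTapped(_ recognizer: UITapGestureRecognizer) {
    // Where in the view did the tap land?
    let location = recognizer.location(in: sceneView)

    // Which nodes sit under that point?
    let hitResults = sceneView.hitTest(location, options: nil)

    if hitResults.count > 0 {
        // The first result is the node closest to the camera
        let node = hitResults[0].node
        node.removeFromParentNode()
    }
}
```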

In this method, you first get the location of the tap as a CGPoint. Next, you use this point to perform a hit test on the sceneView object and store the SCNHitTestResult objects in an array called hitResults. The options parameter of this method can contain a dictionary of keys and values, which you can read about in Apple's documentation. We then check to see if the hit test returned at least one result and, if it did, we remove the first element in the array from its parent node.

If the hit test returned multiple results, the objects are sorted by their z position, that is, the order in which they appear from the current camera's point of view. For example, in the current scene, if you tap on either of the two spheres or the button, the node you tapped will form the first item in the returned array. Because the ground appears directly behind these objects from the camera's point of view, however, the ground node will be another item in the array of results, the second in this case. This happens because a tap in that same location would hit the ground node if the spheres and button weren't there.

Build and run your app, and tap the objects in the scene. They should disappear as you tap each one.

Scene with some deleted nodes

Now that we can determine when a node is tapped, we can start adding animations to the mix.

3. Animation

There are two classes which can be used to perform animations in SceneKit:

  • SCNAction
  • SCNTransaction

SCNAction objects are very useful for simple and reusable animations, such as movement, rotation and scale. You can combine any number of actions together into a custom action object.

The SCNTransaction class can perform the same animations, but it is more versatile in some ways, such as animating materials. This added versatility, however, comes at a cost: SCNTransaction animations are only as reusable as the function they live in, and the setup is done through class methods.

For your first animation, I am going to show you code using both the SCNAction and SCNTransaction classes. The example will move your button down and turn it white when it's tapped. Update the implementation of the sceneTapped(_:) method as shown below.
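A sketch of the updated handler in modern Swift; buttonNode is an assumed property holding the button node:

```swift
@objc func sceneTapped(_ recognizer: UITapGestureRecognizer) {
    let location = recognizer.location(in: sceneView)
    let hitResults = sceneView.hitTest(location, options: nil)

    if hitResults.count > 0 {
        let node = hitResults[0].node
        if node === buttonNode {
            // SCNTransaction animates the material from red to white
            SCNTransaction.begin()
            SCNTransaction.animationDuration = 0.5
            node.geometry?.firstMaterial?.diffuse.contents = UIColor.white
            SCNTransaction.commit()

            // An SCNAction moves the button down along the y axis
            node.runAction(SCNAction.moveBy(x: 0, y: -0.2, z: 0, duration: 0.5))
        }
    }
}
```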

In the sceneTapped(_:) method, we obtain a reference to the node the user has tapped and check whether this is the button in the scene. If it is, we animate its material from red to white, using the SCNTransaction class, and move it along the y axis in a negative direction using an SCNAction instance. The duration of the animation is set to 0.5 seconds.

Build and run your app again, and tap on the button. It should move down and change its color to white as shown in the below screenshot.

Animated button

4. Physics

Setting up realistic physics simulations is easy with the SceneKit framework. The functionality that SceneKit's physics simulations offer is extensive, ranging from basic velocities, accelerations, and forces, to gravitational and electrical fields, and even collision detection.

What you are going to do in the current scene is apply a gravitational field to one of the spheres so that the second sphere is pulled towards the first as a result of the gravity. This force of gravity becomes active when the button is pressed.

The setup for this simulation is very simple. Use an SCNPhysicsBody object for every node that you want to be affected by the physics simulation and an SCNPhysicsField object for every node that you want to be the source of a field. Update the viewDidLoad method as shown below.
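A sketch of the physics portion of viewDidLoad; the node names and the gravityField property are assumptions, and the field strength starts at zero so the button can switch it on later:

```swift
// The ground participates in collisions but is unaffected by forces
let groundShape = SCNPhysicsShape(geometry: groundNode.geometry!, options: nil)
groundNode.physicsBody = SCNPhysicsBody(type: .kinematic, shape: groundShape)

// A radial gravity field emanating from the first sphere
gravityField = SCNPhysicsField.radialGravity()
gravityField.strength = 0 // off until the button is tapped
sphere1Node.physicsField = gravityField

// The first sphere collides but doesn't move
let sphere1Shape = SCNPhysicsShape(geometry: sphere1Node.geometry!, options: nil)
sphere1Node.physicsBody = SCNPhysicsBody(type: .kinematic, shape: sphere1Shape)

// The second sphere is dynamic so the field can pull it
let sphere2Shape = SCNPhysicsShape(geometry: sphere2Node.geometry!, options: nil)
sphere2Node.physicsBody = SCNPhysicsBody(type: .dynamic, shape: sphere2Shape)
```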

We start by creating an SCNPhysicsShape instance that specifies the actual shape of the object that takes part in the physics simulation. For the basic shapes you are using in this scene, the geometry objects are perfectly fine to use. For complicated 3D models, however, it is better to combine multiple primitive shapes together to create an approximate shape of your object for physics simulation.

From this shape, you then create an SCNPhysicsBody instance and add it to the ground of the scene. This is necessary, because every SceneKit scene has by default an existing gravity field that pulls every object downwards. The Kinematic type you give to this SCNPhysicsBody means that the object will take part in collisions, but is unaffected by forces (and won't fall due to gravity).

Next, you create the gravitational field and assign this to the first sphere node. Following the same process as for the ground, you then create a physics body for each of the two spheres. You specify the second sphere as a Dynamic physics body though, because you want it to be affected and moved by the gravitational field you created.

Lastly, you need to set the strength of this field to activate it when the button is tapped. Add the following line to the sceneTapped(_:) method:
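Assuming the field is stored in a property named gravityField (the strength value is a placeholder you can tune):

```swift
gravityField.strength = 750
```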

Build and run your app, tap the button, and watch as the second sphere slowly accelerates towards the first one. Note that it may take a few seconds before the second sphere starts moving.

The second sphere moves towards the first

There's just one thing left to do: make the spheres explode when they collide.

5. Collision Detection and Particle Systems

To create the effect of an explosion we're going to leverage the SCNParticleSystem class. A particle system can be created by an external 3D program, source code, or, as I am about to show you, Xcode's particle system editor. Create a new file by pressing Command+N and choose SceneKit Particle System from the iOS > Resource section.

Particle system template

Set the particle system template to Reactor. Click Next, name the file Explosion, and save it in your project folder.

Particle system type

In the Project Navigator, you will now see two new files, Explosion.scnp and spark.png. The spark.png image is a resource used by the particle system, automatically added to your project. If you open Explosion.scnp, you will see it being animated and rendered in real time in Xcode. The particle system editor is a very powerful tool in Xcode and allows you to customize a particle system without having to do it programmatically. 

Xcodes particle system editor

With the particle system open, go to the Attributes Inspector on the right and change the following attributes in the Emitter section:

  • Birth rate to 300
  • Direction mode to Random

Change the following attributes in the Simulation section:

  • Life span to 3
  • Speed factor to 2

And finally, change the following attributes in the Life cycle section:

  • Emission dur. to 1
  • Looping to Plays once
Particle system attributes 1
Particle system attributes 2

Your particle system should now shoot out in all directions and look similar to the following screenshot:

Finished particle system

Open ViewController.swift and make your ViewController class conform to the SCNPhysicsContactDelegate protocol. Adopting this protocol is necessary to detect a collision between two nodes.

Next, assign the current ViewController instance as the contactDelegate of your physicsWorld object in the viewDidLoad method.
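Assuming scene is the SCNScene created in viewDidLoad:

```swift
scene.physicsWorld.contactDelegate = self
```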

Finally, implement the physicsWorld(_:didUpdateContact:) method in the ViewController class:
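A sketch of the delegate method in modern Swift, where it is spelled physicsWorld(_:didUpdate:); the sphere properties and the particle file name follow the names used in this tutorial:

```swift
func physicsWorld(_ world: SCNPhysicsWorld, didUpdate contact: SCNPhysicsContact) {
    let nodes = [contact.nodeA, contact.nodeB]

    // Only react when the two spheres touch each other
    if nodes.contains(sphere1Node) && nodes.contains(sphere2Node) {
        // Load the particle system created in Xcode's editor
        if let explosion = SCNParticleSystem(named: "Explosion", inDirectory: nil) {
            let explosionNode = SCNNode()
            explosionNode.position = contact.contactPoint
            explosionNode.addParticleSystem(explosion)
            sceneView.scene?.rootNode.addChildNode(explosionNode)
        }

        // Remove both spheres from the scene
        sphere1Node.removeFromParentNode()
        sphere2Node.removeFromParentNode()
    }
}
```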

We first check to see whether the two nodes involved in the collision are the two spheres. If that's the case, then we load the particle system from the file we created a moment ago and add it to a new node. Finally, we remove both spheres involved in the collision from the scene.

Build and run your app again, and tap the button. When the spheres make contact, they should both disappear and your particle system should appear and animate.

Explosion when the two spheres collide

Conclusion

In this tutorial, I showed you how to implement user interaction, animation, physics simulation, and particle systems using the SceneKit framework. The techniques you've learned in this series can be applied to any project with any number of animations, physics simulations, etc.

You should now be comfortable creating a simple scene and adding dynamic elements to it, such as animations and particle systems. The concepts you have learned in this series are applicable to the smallest scene with a single object all the way up to a large-scale game.

Published 2015-05-15 by Davis Allie

An Introduction to SceneKit: User Interaction, Animations & Physics

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-23877
Final product image
What You'll Be Creating

This is the second part of our introductory series on SceneKit. In this tutorial, I assume you are familiar with the concepts explained in the first part, including setting up a scene with lights, shadows, cameras, nodes, and materials.

In this tutorial, I am going to teach you about some of the more complicated—but also more useful—features of SceneKit, such as animation, user interaction, particle systems, and physics. By implementing these features, you can create interactive and dynamic 3D content rather than static objects like you did in the previous tutorial.

1. Setting Up the Scene

Create a new Xcode project based on the iOS > Application > Single View Application template.

iOS App Template

Name the project, set Language to Swift, and Devices to Universal.

App Information

Open ViewController.swift and import the SceneKit framework.

Next, declare the following properties in the ViewController class.

We set up the scene in the viewDidLoad method as shown below.

The implementation of viewDidLoad should look familiar if you've read the first part of this series. All we do is setting up the scene that we'll use in this tutorial. The only new things include the SCNFloor class and the zFar property.

As the name implies, the SCNFloor class is used to create a floor or ground for the scene. This is much easier compared to creating and rotating an SCNPlane as we did in the previous tutorial.

The zFar property determines how far into the distance a camera can see or how far light from a particular source can reach. Build and run your app. Your scene should look something like this:

Initial scene

2. User Interaction

User interaction is handled in SceneKit by a combination of the UIGestureRecognizer class and hit tests. To detect a tap, for example, you first add a UITapGestureRecognizer to a SCNView, determine the tap's position in the view, and see if it is in contact with or hits any of the nodes.

To better understand how this works, we'll use an example. Whenever a node is tapped, we remove it from the scene. Add the following code snippet to the viewDidLoad method of the ViewController class:

Next, add the following method to the ViewController class:

In this method, you first get the location of the tap as a CGPoint. Next, you use this point to perform a hit test on the sceneView object and store the SCNHitTestResult objects in an array called hitResults. The options parameter of this method can contain a dictionary of keys and values, which you can read about in Apple's documentation. We then check to see if the hit test returned at least one result and, if it did, we remove the first element in the array from its parent node.

If the hit test returned multiple results, the objects are sorted by their z position, that is, the order in which they appear from the current camera's point of view. For example, in the current scene, if you tap on either of the two spheres or the button, the node you tapped will form the first item in the returned array. Because the ground appears directly behind these objects from the camera's point of view, however, the ground node will be another item in the array of results, the second in this case. This happens because a tap in that same location would hit the ground node if the spheres and button weren't there.

Build and run your app, and tap the objects in the scene. They should disappear as you tap each one.

Scene with some deleted nodes

Now that we can determine when a node is tapped, we can start adding animations to the mix.

3. Animation

There are two classes which can be used to perform animations in SceneKit:

  • SCNAction
  • SCNTransaction

SCNAction objects are very useful for simple and reusable animations, such as movement, rotation and scale. You can combine any number of actions together into a custom action object.

The SCNTransaction class can perform the same animations, but it is more versatile in some ways, such as animating materials. This added versatility, however, comes at the cost of SCNTransaction animations only having the same reusability as a function and the setup being done via class methods.

For your first animation, I am going to show you code using both the SCNAction and SCNTransaction classes. The example will move your button down and turn it white when it's tapped. Update the implementation of the sceneTapped(_:) method as shown below.

In the sceneTapped(_:) method, we obtain a reference to the node the user has tapped and check whether this is the button in the scene. If it is, we animate its material from red to white, using the SCNTransaction class, and move it along the y axis in a negative direction using an SCNAction instance. The duration of the animation is set to 0.5 seconds.

Build and run your app again, and tap on the button. It should move down and change its color to white as shown in the below screenshot.

Animated button

4. Physics

Setting up realistic physics simulations is easy with the SceneKit framework. The functionality that SceneKit's physics simulations offer, is extensive, ranging from basic velocities, accelerations and forces, to gravitational and electrical fields, and even collision detection.

What you are going to do in the current scene is, apply a gravitational field to one of the spheres so that the second sphere is pulled towards the first sphere as a result of the gravity. This force of gravity will become active when the button is pressed.

The setup for this simulation is very simple. Use an SCNPhysicsBody object for every node that you want to be affected by the physics simulation and an SCNPhysicsField object for every node that you want to be the source of a field. Update the viewDidLoad method as shown below.

We start by creating an SCNPhysicsShape instance that specifies the actual shape of the object that takes part in the physics simulation. For the basic shapes you are using in this scene, the geometry objects are perfectly fine to use. For complicated 3D models, however, it is better to combine multiple primitive shapes together to create an approximate shape of your object for physics simulation.

From this shape, you then create an SCNPhysicsBody instance and add it to the ground of the scene. This is necessary, because every SceneKit scene has by default an existing gravity field that pulls every object downwards. The Kinematic type you give to this SCNPhysicsBody means that the object will take part in collisions, but is unaffected by forces (and won't fall due to gravity).

Next, you create the gravitational field and assign this to the first sphere node. Following the same process as for the ground, you then create a physics body for each of the two spheres. You specify the second sphere as a Dynamic physics body though, because you want it to be affected and moved by the gravitational field you created.

Lastly, you need to set the strength of this field to activate it when the button is tapped. Add the following line to the sceneTapped(_:) method:

Build and run your app, tap the button, and watch as the second sphere slowly accelerates towards the first one. Note that it may take a few seconds before the second sphere starts moving.

First sphere moves towards the second

There's just one thing left to do, however, make the spheres explode when they collide.

5. Collision Detection and Particle Systems

To create the effect of an explosion we're going to leverage the SCNParticleSystem class. A particle system can be created by an external 3D program, source code, or, as I am about to show you, Xcode's particle system editor. Create a new file by pressing Command+N and choose SceneKit Particle System from the iOS > Resource section.

Particle system template

Set the particle system template to Reactor. Click Next, name the file Explosion, and save it in your project folder.

Particle system type

In the Project Navigator, you will now see two new files, Explosion.scnp and spark.png. The spark.png image is a resource used by the particle system, automatically added to your project. If you open Explosion.scnp, you will see it being animated and rendered in real time in Xcode. The particle system editor is a very powerful tool in Xcode and allows you to customize a particle system without having to do it programmatically. 

Xcode's particle system editor

With the particle system open, go to the Attributes Inspector on the right and change the following attributes in the Emitter section:

  • Birth rate to 300
  • Direction mode to Random

Change the following attributes in the Simulation section:

  • Life span to 3
  • Speed factor to 2

And finally, change the following attributes in the Life cycle section:

  • Emission dur. to 1
  • Looping to Plays once

Particle system attributes 1
Particle system attributes 2

Your particle system should now shoot out in all directions and look similar to the following screenshot:

Finished particle system

Open ViewController.swift and make your ViewController class conform to the SCNPhysicsContactDelegate protocol. Adopting this protocol is necessary to detect a collision between two nodes.

Next, assign the current ViewController instance as the contactDelegate of your physicsWorld object in the viewDidLoad method.

Finally, implement the physicsWorld(_:didUpdateContact:) method in the ViewController class:

We first check to see whether the two nodes involved in the collision are the two spheres. If that's the case, then we load the particle system from the file we created a moment ago and add it to a new node. Finally, we remove both spheres involved in the collision from the scene.

Build and run your app again, and tap the button. When the spheres make contact, they should both disappear and your particle system should appear and animate.

Explosion when the two spheres collide

Conclusion

In this tutorial, I showed you how to implement user interaction, animation, physics simulation, and particle systems using the SceneKit framework. The techniques you've learned in this series can be applied to any project with any number of animations, physics simulations, etc.

You should now be comfortable creating a simple scene and adding dynamic elements to it, such as animation and particles systems. The concepts you have learned in this series are applicable to the smallest scene with a single object all the way up to a large scale game.

2015-05-15T16:45:40.000Z Davis Allie

Quick Tip: Add Facebook Login to Your Android App


Facebook Login provides a convenient and secure way for people to log in to an app without having to go through a sign-up process first. Using the latest version of Facebook's SDK for Android, it takes only a few minutes to add this feature to your app.

In this quick tip, you will learn how to add a Facebook login button to an Android app and handle the events to log a user in using Facebook Login.

Prerequisites

Before you begin, make sure you have access to the following:

1. Register Your App

All apps that use the Facebook SDK must be registered with Facebook. Log in to the Facebook Developers website and click Create a New App in the top right.

Facebook Developers Website

You are presented with a form that asks for the app's Display Name, Namespace, and Category. Enter the required fields and click Create App ID.

Form for creating a new app ID

In the next screen, you are able to see your Application ID. Make a note of it, because you will be needing it later in this tutorial.

App ID and app secret

Open Settings from the left and click the Add Platform button. From the pop-up, select Android.

Select Platform dialog

In the next form, enter the package name of your app and the name of your Activity. If you haven't created your app or Activity yet, make sure you remember the values you entered.

To fill in the Key Hashes field, open a terminal window and run the keytool command to generate a key hash using the debug keystore located at ~/.android/debug.keystore. This is what the command should look like.
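The command block is not shown here, but assuming the default debug keystore path and alias, and openssl available on your PATH, the command suggested by Facebook's documentation typically looks like this (paths may differ on your machine):

```sh
keytool -exportcert -alias androiddebugkey -keystore ~/.android/debug.keystore | openssl sha1 -binary | openssl base64
```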

The default password for the debug keystore is android. Enter that password when prompted. The output of the command should be a string of 28 characters. Copy it, go back to your browser, and paste the string into the Key Hashes field as shown below.

Android app details

Make sure Single Sign On is set to Yes and click the Save Changes button. Your app is now registered.

2. Add Facebook SDK to Your Project

The Facebook SDK is available on Maven Central. To use this repository, edit the build.gradle file in your project's app directory and add the following code to it before the list of dependencies:

You can now add the Facebook SDK to your project as a compile dependency. Add the following code to the list of dependencies:
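As a sketch, the two build.gradle additions could look like the following; the version number is an assumption, so check for the latest release of the SDK:

```groovy
repositories {
    mavenCentral()
}

dependencies {
    // Illustrative version; use the latest release of the SDK.
    compile 'com.facebook.android:facebook-android-sdk:4.1.0'
}
```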

3. Create an Activity

Step 1: Define the Layout

Create a new layout named main_activity.xml in res/layout. This is going to be a very simple layout with only two widgets:

  • LoginButton to allow the user to log in to Facebook
  • TextView to display the result of the latest login attempt

You can place them inside a RelativeLayout. After including attributes for padding and positioning the widgets, the layout's XML will look something like this:

Step 2: Create the Class

Create a new Java class that extends Activity and name it MainActivity.java. Remember that the name of this class and the package that it belongs to should match the values you entered while registering your app with Facebook.

Declare the widgets you defined in the activity's layout as fields of this class.

Declare a CallbackManager as another field. The CallbackManager, as its name suggests, is used to manage the callbacks used in the app.

The SDK needs to be initialized before using any of its methods. You can do so by calling sdkInitialize and passing the application's context to it. Add the following code to the onCreate method of your Activity:

Next, initialize your instance of CallbackManager using the CallbackManager.Factory.create method.

Call setContentView to set the layout defined in the previous step as the layout of this Activity and then use findViewById to initialize the widgets.

It's time to create a callback to handle the results of the login attempts and register it with the CallbackManager. Custom callbacks should implement FacebookCallback. The interface has methods to handle each possible outcome of a login attempt:

  • If the login attempt is successful, onSuccess is called.
  • If the user cancels the login attempt, onCancel is called.
  • If an error occurs, onError is called.

To register the custom callback, use the registerCallback method. The code to create and register the callback should look like this:

You can now add code to these methods to display appropriate messages using the setText method of the TextView.

When the onSuccess method is called, a LoginResult is passed as a parameter. Retrieve the access token it contains using getAccessToken and use its getUserId method to get the user's ID. To get the token in the form of a String, use getToken. Display these values in the TextView by adding the following code to the onSuccess method:
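To illustrate extracting the user ID and token, here is a minimal sketch. AccessToken and LoginResult below are local stand-ins that only mirror the two getters named above; they are not the real SDK classes.

```java
// Stand-in for the SDK's AccessToken, mirroring only the getters we use.
class AccessToken {
    private final String userId;
    private final String token;
    AccessToken(String userId, String token) { this.userId = userId; this.token = token; }
    String getUserId() { return userId; }
    String getToken() { return token; }
}

// Stand-in for the SDK's LoginResult.
class LoginResult {
    private final AccessToken accessToken;
    LoginResult(AccessToken accessToken) { this.accessToken = accessToken; }
    AccessToken getAccessToken() { return accessToken; }
}

class LoginResultDemo {
    // Builds the message the TextView would display on a successful login.
    static String buildSuccessMessage(LoginResult loginResult) {
        AccessToken token = loginResult.getAccessToken();
        return "User ID: " + token.getUserId() + "\nAuth Token: " + token.getToken();
    }
}
```

In the real app, this string-building happens inside onSuccess, with the LoginResult the SDK passes in.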

If the user cancels the login attempt, we display a message saying "Login attempt canceled". Add the following code to the onCancel method:

Similarly, add the following code to the onError method:

Tapping the login button starts off a new Activity, which returns a result. To receive and handle the result, override the onActivityResult method of your Activity and pass its parameters to the onActivityResult method of CallbackManager.

4. Add the Facebook Application ID

The application ID you received when you registered your app should be added as a string in your project's res/values/strings.xml. For this tutorial, call the string facebook_app_id.

5. Edit the Manifest

Define your Activity in the AndroidManifest.xml. If it is the first Activity of your app, you should also add an intent-filter that responds to the action android.intent.action.MAIN.

Add the application ID as meta-data.

Define FacebookActivity as another Activity that belongs to your app. It handles most of the configuration changes itself. You need to mention that using the configChanges attribute.

Finally, you have to request android.permission.INTERNET to be able to connect to Facebook's servers.

6. Build and Run

Your app is now complete. When you build it and deploy it on your Android device, you will see the Facebook login button.

The Log in with Facebook button

Tapping the login button takes you to a Facebook page that asks you to log in and authorize the app.

Authorization screen

After successfully logging in, the TextView will display the user ID and auth token.

Result of a successful login

Conclusion

In this quick tip, you learned how to use the Facebook SDK to add Facebook Login to your Android app. You also learned how to handle the possible outcomes of a login attempt. To learn more about Facebook Login, you can go through the reference for the Facebook SDK for Android.



2015-05-18T16:30:13.000Z Ashraff Hathibelagal


Creating a Weather Application for Mars Using Volley

Final product image
What You'll Be Creating

Introduction

In this tutorial, I will show you a possible use case of what we learnt in the previous article about Volley. We will create a weather application for Mars, using the information collected by the Curiosity rover, which is made available to everyone by NASA through the {MAAS} API.

First, we will set up the project in Android Studio and design the user interface. We will then structure the core of the application using Volley. Since every beautiful application features some images, I will show you how to fetch a random one using Flickr's API. We will download the picture with Volley, mostly because of its great caching system. Finally, we will add some fancy details to give the application a gorgeous look and feel.

1. Project Setup

First, create a new project in Android Studio. Since Volley is backwards compatible, you can choose whatever API level you prefer. I opted for API 21, but you should be fine as long as the API level is 8 (Froyo) or higher.

Step 1: User Interface

Our application has a single, simple activity. You can call it MainActivity.java, as suggested by Android Studio. Open the layout editor and double-click activity_main.xml.

Since we would like to dedicate about 70% of the screen to the image and the rest to the weather information, we need to use the XML attribute layout_weight. Of course, we could use absolute values too, but it wouldn't be the same. Unfortunately, the Android world features displays that are anything but homogeneous, and specifying an absolute value for the height of the image could result in a 90-10 ratio on very small devices and a 70-30, or even a 60-40, ratio on larger devices. The layout_weight attribute is what you need to solve this problem.

Inside the first child, add the ImageView:

In the second RelativeLayout, we add a list of TextView items. Two of them are views in which the average temperature and the atmosphere opacity are shown. The third is an error label.

The layout should now be complete. You can add more details if you want, but a complex and detailed user interface is not within the scope of this tutorial.

Step 2: Theme and Permissions

There are two more things we need to take care of before starting to dig into the core of the application. Change the inherited theme of the application to android:Theme.Material.Light.NoActionBar. This means that we don't need to hide the action bar at run time.

Finally, add the internet permission to the project's manifest.

2. Application Core

Step 1: Import Volley

As we discussed in the previous article, the simplest and most reliable way to use Volley is by importing the library as a new module. Download the source code of the library, import it via File > New > Module, and tell the compiler in the project's build.gradle file to include it in the project.

Step 2: Implement Helper Class

As I already pointed out in the previous article, if you need to fire multiple requests, it is better to use a shared request queue. You should avoid creating a request queue each time you schedule a request by invoking Volley.newRequestQueue, because you don't want to end up with memory leaks and other unwanted problems.

To do that, you first have to create a class using the singleton pattern. The class is referenced through a static, globally visible variable, which holds the RequestQueue object. This way, you end up with a single RequestQueue for the application. Then, by extending the Application class, you tell the operating system to create this object at application startup, even before the first activity is created.

Since we're in the Android environment, we slightly modify the common singleton structure. The class needs to create a new instance of itself in the Application.onCreate method—not in a generic getInstance method when it is null.

To achieve this, create a new class and name it MarsWeather.java. Next, extend the Android Application class, override the onCreate method, and initialize the RequestQueue object of the static instance.

In the singleton class, we expose the object through a public, synchronized getInstance method, which simply returns the mInstance variable. Because onCreate is invoked when the application starts, mInstance will already be set the first time getInstance is called.
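The pattern above can be sketched in plain Java as follows; RequestQueueStub stands in for Volley's RequestQueue, and onCreate plays the role of Application.onCreate.

```java
// Stand-in for Volley's RequestQueue.
class RequestQueueStub {}

class MarsWeatherSketch {
    private static MarsWeatherSketch mInstance;
    private RequestQueueStub mRequestQueue;

    // Plays the role of Application.onCreate, which the OS invokes
    // before any activity is created.
    void onCreate() {
        mInstance = this;
        mRequestQueue = new RequestQueueStub();
    }

    // mInstance is already set by the time this is first called.
    static synchronized MarsWeatherSketch getInstance() {
        return mInstance;
    }

    RequestQueueStub getRequestQueue() {
        return mRequestQueue;
    }
}
```

The key difference from a textbook singleton is that getInstance never constructs the object; the framework's startup callback does.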

Next, tell in the AndroidManifest.xml file that you want MarsWeather to be loaded at application startup. In the <application> tag, add the attribute name as follows:

That's it. An instance of the Application class is created, even before MainActivity is created. Along with all the other standard operations, onCreate generates an instance of the RequestQueue.

We need to implement three other methods to finish up the helper class. The first method replaces Volley.newRequestQueue, which I'll name getRequestQueue. We also need a method to add a request to the queue, add, and a method that's responsible for canceling requests, cancel. The following code block shows what the implementation looks like.

TAG is a generic token you use to identify the request. In this specific case, it can be whatever you want:
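A compact sketch of the helper's shape, using a list-backed stand-in for Volley's RequestQueue; the point is that add tags every request so that cancel can later drop all of them at once.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for Volley's Request: just enough to carry a tag.
class Request {
    private Object tag;
    void setTag(Object tag) { this.tag = tag; }
    Object getTag() { return tag; }
}

// Stand-in for Volley's RequestQueue.
class SimpleQueue {
    private final List<Request> pending = new ArrayList<>();
    void add(Request req) { pending.add(req); }
    void cancelAll(Object tag) { pending.removeIf(r -> tag.equals(r.getTag())); }
    int size() { return pending.size(); }
}

class QueueHelperSketch {
    public static final String TAG = "MarsWeather";
    private final SimpleQueue queue = new SimpleQueue();

    // Mirrors the helper's add: tag the request, then enqueue it.
    <T extends Request> void add(T req) {
        req.setTag(TAG);
        queue.add(req);
    }

    // Mirrors the helper's cancel: drop everything carrying our tag.
    void cancel() {
        queue.cancelAll(TAG);
    }

    int pendingCount() { return queue.size(); }
}
```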

Step 3: Implement the Custom Request

As you already know, Volley provides three standard request types: StringRequest, ImageRequest, and JsonRequest. Our application is going to use the last of the three, JsonRequest, to fetch weather data and retrieve the list of random images.

By default, Volley sets the priority of the request to NORMAL. Usually that would be fine, but in our application we have two requests that are quite different and we therefore need to have a different priority in the queue. Fetching the weather data needs to have a higher priority than fetching the URL of the random image.

For that reason, we need to customize the JsonRequest class. Create a new class named CustomJsonRequest.java, and make sure it extends JsonObjectRequest. Next, override the getPriority method as shown below.
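As a sketch of the override, here are simplified stand-ins for Volley's Priority enum and base Request class; in the real CustomJsonRequest you extend JsonObjectRequest and the enum comes from Volley itself.

```java
// Stand-in for Volley's Request.Priority enum.
enum Priority { LOW, NORMAL, HIGH, IMMEDIATE }

// Stand-in for Volley's base Request, which defaults to NORMAL priority.
class BaseRequest {
    Priority getPriority() { return Priority.NORMAL; }
}

class CustomJsonRequestSketch extends BaseRequest {
    private Priority mPriority;

    void setPriority(Priority priority) { mPriority = priority; }

    @Override
    Priority getPriority() {
        // Fall back to NORMAL when no priority has been set explicitly.
        return mPriority == null ? Priority.NORMAL : mPriority;
    }
}
```

With this in place, the weather request can be given HIGH priority while the image request keeps the default.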

Step 4: Fetching Data

We've finally arrived at the most interesting part of this tutorial, in which we write the implementation to fetch the weather data. The endpoint of the request is:

The APIs are browsable, so open the link to inspect the resulting JSON. The JSON contains a simple object, result, that includes a series of strings, ranging from temperatures to wind direction and sunset time.

Start by declaring the following variables in the MainActivity class:

You can call MarsWeather.getInstance outside of onCreate. Since the class will already be initialized, you don’t need to wait for the onStart method to call it. Of course, you have to set the references of the user interface views in the onCreate method.

After doing that, it's time to implement the loadWeatherData method. We create a custom Volley request and set the priority to HIGH. We then invoke the helper's add method to add it to the request queue. The important thing to note is the result listener, since it's going to affect the user interface.

As you can see, the method takes the minimum and maximum temperatures, computes the average temperature, and updates the user interface. I also implemented a simple method to handle errors.
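The averaging step reduces to simple arithmetic. A minimal sketch, assuming the API delivers the two temperatures as strings (the field names in the MAAS payload are not shown here):

```java
class WeatherMath {
    // Averages the two temperature strings from the weather report
    // and rounds to the nearest whole degree.
    static long averageTemperature(String minTemp, String maxTemp) {
        double avg = (Double.parseDouble(minTemp) + Double.parseDouble(maxTemp)) / 2.0;
        return Math.round(avg);
    }
}
```

For example, a Martian day with a minimum of -80 and a maximum of -20 averages out to -50.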

We now only need to call loadWeatherData in onCreate and you're done. The app is now ready to show the weather of Mars.

3. Fetching Image Data

Now that you have the core of the app ready and working, we can focus on making the app visually more appealing. We are going to do this by fetching a random Mars image and displaying it to the user.

Step 1: Fetch a Random Picture

You will need a Flickr API key to fetch a random list of contextualized images. The image endpoint is the following:

As you can see, the request is fairly simple. You are telling Flickr to return results formatted as JSON (format=json) without a JSON callback wrapper (nojsoncallback=1). You are searching for images (method=flickr.photos.search) whose tags are related to Mars (tags=mars,planet,rover). Take a look at the documentation for more information about the format of the request URL.

Start by declaring the following variables:

Next, implement the searchRandomImage method:

As you can see, Flickr sends back a JSONArray containing the images. The method I wrote to fetch a random image generates a random number between zero and the size of the array. It takes the item at that index from the array of results and constructs the URL for the image following these guidelines.
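The two steps can be sketched as follows. The URL pattern follows Flickr's documented farm/server/id/secret scheme for photo source URLs; treat the exact pattern as an assumption and check Flickr's guidelines for size suffixes.

```java
import java.util.Random;

class FlickrUrlSketch {
    // Builds a photo URL from the fields Flickr returns for each result,
    // following the farm/server/id/secret pattern.
    static String buildUrl(int farm, String server, String id, String secret) {
        return "https://farm" + farm + ".staticflickr.com/"
                + server + "/" + id + "_" + secret + ".jpg";
    }

    // Picks a random index in [0, size), as the tutorial does for the
    // array of search results.
    static int randomIndex(Random random, int size) {
        return random.nextInt(size);
    }
}
```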

Like before, we need a method for error handling:

Finally, call searchRandomImage in the onCreate method and don't forget to catch any exceptions.

Step 2: Show the Picture

Now that we have a URL to load, we can show the picture. You already learned how to do this in the previous article.

In the onResponse method we wrote in the previous step, we are finally able to handle the result.

Step 3: Showing a New Image Every Day

Maybe you already noticed that we are bypassing Volley's caching system by fetching a random image every time the application is launched. We need to find a way to show the same image on a particular day.

The simplest way to achieve this is by using Android’s SharedPreferences. Start by declaring the variables we'll need for this.

Next, in the onCreate method, before the call to searchRandomImage, initialize mSharedPref.

The idea is to store the current day every time we fetch a new random picture. Of course, we store the URL of the image alongside the day. When the application launches, we check whether we already have an entry in the SharedPreferences for the current day. If we have a match, we use the stored URL. Otherwise we fetch a random image and store its URL in the SharedPreferences.
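The decision logic above can be modeled with a plain Map standing in for SharedPreferences; the key names here are illustrative assumptions, not the ones used in the tutorial's source.

```java
import java.util.HashMap;
import java.util.Map;

class DailyImageCache {
    // Stand-in for SharedPreferences.
    private final Map<String, String> prefs = new HashMap<>();

    // Returns the cached URL when `today` matches the stored day;
    // otherwise stores freshUrl for today and returns it.
    String urlForDay(String today, String freshUrl) {
        if (today.equals(prefs.get("last_day"))) {
            return prefs.get("image_url");   // same day: reuse the stored URL
        }
        prefs.put("last_day", today);        // new day: remember the new pick
        prefs.put("image_url", freshUrl);
        return freshUrl;
    }
}
```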

In searchRandomImage, after the definition of imageUrl, add the following lines of code:

The onCreate method, after the definition of mSharedPref, now becomes:

That's it. Your application is ready. Feel free to download the source files of this tutorial from GitHub to see the completed project, and take a look at it if you're running into issues.

Bonus Tip: Improving the User Interface

Step 1: Font

The font used in a user interface often determines the look and feel of an application. Let's start by replacing the default Roboto font with a more appealing one, such as Lato Light.

Create a new folder named fonts in the assets folder. If you can’t find the assets folder, you have to create it at the same level as the java folder. The folder structure should look something like app\src\main\assets\fonts.

Copy the file Lato-light.ttf into the fonts folder. In the onCreate method, you need to override the default typeface of the views in which you'd like to use the new font.

Step 2: Transparent Status Bar

Following the guidelines for Android Material Design, we can make the status bar transparent. This way, the background will be partially visible through the status bar.

You can achieve this by making a small change in the application's theme. Edit the project's v21\style.xml file like this:

Make sure that the AndroidManifest.xml is already set to use the theme:

Conclusion

We made a long journey. In the first article, we started talking about Volley and its applications. In this tutorial, we looked at a practical way to implement the concepts we learned by building a weather application for Mars. You should now have a good understanding of the Volley library, how it works, and what you can use it for.

2015-05-20T16:55:41.000Z Gianluca Segato

Creating a Weather Application for Mars Using Volley

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-23812
Final product image
What You'll Be Creating

Introduction

In this tutorial, I will show you a possible use case of what we learnt in the previous article about Volley. We will create a weather application for Mars, using the information collected by the Curiosity rover, which is made available to everyone by NASA through the {MAAS} API.

First, we will set up the project in Android Studio and design the user interface. We will then structure the core of the application using Volley. Since every beautiful application features some images, I will show you how to fetch a random one using Flickr's API. We will download the picture with Volley, mostly because of its great caching system. Finally, we will add some fancy details to give the application a gorgeous look and feel.

1. Project Setup

First, create a new project in Android Studio. Since Volley is backwards compatible, you can choose whatever API level you prefer. I opted for API 21, but you should be fine as long as the API level is 8 (Froyo) or higher.

Step 1: User Interface

Our application has a single, simple activity. You can call it MainActivity.java, as suggested by Android Studio. Open the layout editor and double-click activity_main.xml.

Since we would like to have about 70% of the screen dedicated to the image and the rest to the weather information, we need to use the XML attribute layout_weight. Of course, we can use absolute values too, but it wouldn't be the same. Unfortunately, the Android world features displays that are anything but homogenous, and specifying an absolute value for the height of the image could result in a 90-10 ratio on very small devices and a 70-30, or even a 60-40 relation, on larger devices. The layout_weight attribute is what you need to solve this problem.

Inside the first child, add the ImageView:

In the second RelativeLayout, we add a list of TextView items. Two of them are views in which the average temperature and the atmosphere opacity are shown. The third is an error label.

The layout should now be complete. You can add more details if you want, but a complex and detailed user interface is not within the scope of this tutorial.

Step 2: Theme and Permissions

There are two more things we need to take care of before starting to dig into the core of the application. Change the inherited theme of the application to android:Theme.Material.Light.NoActionBar. This means that we don't need to hide the action bar at run time.

Finally, add the internet permission to the project's manifest.

2. Application Core

Step 1: Import Volley

As we discussed in the previous article, the simplest and most reliable way to use Volley is by importing the library as a new module. Download the source code of the library, import it via File > New > Module, and tell the compiler in the project's build.gradle file to include it in the project.

Step 2: Implement Helper Class

As I already pointed out in the previous article, if you need to fire multiple requests, it is better to use a shared request queue. You should avoid creating a request queue each time you schedule a request by invoking Volley.newRequestQueue, because you don't want to end up with memory leaks and other unwanted problems.

To do that, you first have to create a class using the singleton pattern. The class is referenced using a static, globally visible variable, which then handles the object RequestQueue. This way, you end up with a single RequestQueue for the application. Then, extending the Application class, you have to tell to the operating system to generate this object at application startup, even before the first activity is created.

Since we're in the Android environment, we slightly modify the common singleton structure. The class needs to create a new instance of itself in the Application.onCreate method—not in a generic getInstance method when it is null.

To achieve this, create a new class and name it MarsWeather.java. Next, extend the Android Application class, override the onCreate method, and initialize the RequestQueue object of the static instance.

In the singleton class, we construct the object of the class using a public and synchronized function getInstance. Inside this method, we return the mInstance variable. The onCreate method is invoked when the application is started so the mInstance variable will already be set the first time the getInstance method is invoked.
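Since the Android and Volley classes don't run outside a device, here is a framework-free Java sketch of the structure just described. The member names mirror the article, but a plain List stands in for Volley's RequestQueue, and main simulates the startup call the system would normally make.

```java
// Framework-free sketch of the eager singleton described above.
// A List<String> stands in for com.android.volley.RequestQueue; on Android
// the onCreate body would live in Application.onCreate.
import java.util.ArrayList;
import java.util.List;

public class MarsWeather {
    private static MarsWeather mInstance;
    private List<String> mRequestQueue; // stand-in for Volley's RequestQueue

    // Simulates Application.onCreate: runs once, at startup,
    // before any activity exists.
    public void onCreate() {
        mInstance = this;
        mRequestQueue = new ArrayList<>();
    }

    // getInstance never creates the object: onCreate already did.
    public static synchronized MarsWeather getInstance() {
        return mInstance;
    }

    public List<String> getRequestQueue() {
        return mRequestQueue;
    }

    public static void main(String[] args) {
        MarsWeather app = new MarsWeather();
        app.onCreate(); // the system would do this at startup
        System.out.println(MarsWeather.getInstance() == app); // prints "true"
    }
}
```

The eager initialization in onCreate is the key difference from a textbook lazy singleton: getInstance can simply return the field, with no null check and no construction.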

Next, tell in the AndroidManifest.xml file that you want MarsWeather to be loaded at application startup. In the <application> tag, add the attribute name as follows:
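With MarsWeather in the application's root package, the attribute looks like this (other attributes and the activity declarations are omitted here):

```xml
<application
    android:name=".MarsWeather">
    <!-- activities, icon, label, theme, ... -->
</application>
```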

That's it. An instance of the Application class is created, even before MainActivity is created. Along with all the other standard operations, onCreate generates an instance of the RequestQueue.

We need to implement three other methods to finish up the helper class. The first method replaces Volley.newRequestQueue, which I'll name getRequestQueue. We also need a method to add a request to the queue, add, and a method that's responsible for canceling requests, cancel. The following code block shows what the implementation looks like.

TAG is a generic token you use to identify the request. In this specific case, it can be whatever you want:
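As a plain-Java illustration of tagged add and cancel, the sketch below uses a tagged list in place of Volley's RequestQueue. It mimics what Volley itself does when you call request.setTag(TAG), queue.add(request), and queue.cancelAll(TAG).

```java
// Plain-Java sketch of the helper's add and cancel methods.
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class QueueHelper {
    public static class Request {
        public final String name;
        public final String tag;
        public Request(String name, String tag) {
            this.name = name;
            this.tag = tag;
        }
    }

    private final List<Request> queue = new ArrayList<>();

    // add(): enqueue an already-tagged request
    public void add(Request request) {
        queue.add(request);
    }

    // cancel(): drop every pending request carrying the given tag,
    // like RequestQueue.cancelAll(TAG) in Volley
    public void cancel(String tag) {
        for (Iterator<Request> it = queue.iterator(); it.hasNext(); ) {
            if (it.next().tag.equals(tag)) {
                it.remove();
            }
        }
    }

    public int size() {
        return queue.size();
    }

    public static void main(String[] args) {
        QueueHelper helper = new QueueHelper();
        helper.add(new Request("weather", "app"));
        helper.add(new Request("image", "misc"));
        helper.cancel("app");
        System.out.println(helper.size()); // prints "1"
    }
}
```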

Step 3: Implement the Custom Request

As you already know, Volley provides three standard request types: StringRequest, ImageRequest, and JsonRequest. Our application is going to use the latter to fetch weather data and retrieve the list of random images.

By default, Volley sets the priority of the request to NORMAL. Usually that would be fine, but in our application we have two requests that are quite different and we therefore need to have a different priority in the queue. Fetching the weather data needs to have a higher priority than fetching the URL of the random image.

For that reason, we need to customize the JsonRequest class. Create a new class named CustomJsonRequest.java, and make sure it extends JsonObjectRequest. Next, override the getPriority method as shown below.
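Volley's Request class defines a Priority enum (LOW, NORMAL, HIGH, IMMEDIATE) and a getPriority method that returns NORMAL by default. Outside the Android toolchain, the override can be sketched with plain stand-in classes like this:

```java
// Sketch of the priority override; plain classes stand in for the
// Volley types, but the enum values match Volley's Request.Priority.
public class PrioritySketch {
    public enum Priority { LOW, NORMAL, HIGH, IMMEDIATE }

    // Stand-in for Volley's JsonObjectRequest
    public static class JsonRequest {
        public Priority getPriority() {
            return Priority.NORMAL; // Volley's default
        }
    }

    // CustomJsonRequest lets the caller choose the priority
    public static class CustomJsonRequest extends JsonRequest {
        private Priority mPriority = Priority.NORMAL;

        public void setPriority(Priority priority) {
            mPriority = priority;
        }

        @Override
        public Priority getPriority() {
            return mPriority;
        }
    }
}
```

The queue consults getPriority when ordering pending requests, so a HIGH request jumps ahead of any NORMAL ones still waiting.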

Step 4: Fetching Data

We've finally arrived at the most interesting part of this tutorial, in which we write the implementation to fetch the weather data. The endpoint of the request is:

The API is browsable, so open the link to inspect the resulting JSON. The JSON contains a simple object, result, that includes a series of strings, ranging from temperatures to wind direction and sunset time.
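An illustrative shape of the response is shown below. The result key comes from the article, but the field names and values here are invented placeholders, so check the actual JSON in your browser.

```json
{
  "result": {
    "min_temp": "-81",
    "max_temp": "-27",
    "atmo_opacity": "Sunny",
    "wind_direction": "--",
    "sunset": "2015-05-20T17:21:00Z"
  }
}
```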

Start by declaring the following variables in the MainActivity class:

You can call MarsWeather.getInstance outside of onCreate. Since the class will already be initialized, you don’t need to wait for the onStart method to call it. Of course, you have to set the references of the user interface views in the onCreate method.

After doing that, it's time to implement the loadWeatherData method. We create a custom Volley request and set the priority to HIGH. We then invoke the helper's add method to add it to the request queue. The important thing to note is the result listener, since it's going to affect the user interface.

As you can see, the method takes the minimum and maximum temperatures, computes the average temperature, and updates the user interface. I also implemented a simple method to handle errors.
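The averaging step itself is plain arithmetic. A minimal sketch, assuming the two temperatures have already been parsed to integers:

```java
// The averaging step from the result listener, isolated.
public class WeatherMath {
    // Average of the day's minimum and maximum temperature.
    public static int averageTemperature(int minTemp, int maxTemp) {
        return (minTemp + maxTemp) / 2;
    }

    public static void main(String[] args) {
        System.out.println(averageTemperature(-81, -27)); // prints "-54"
    }
}
```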

We now only need to call loadWeatherData in onCreate and we're done. The app is ready to show the weather on Mars.

3. Fetching Image Data

Now that you have the core of the app ready and working, we can focus on making the app visually more appealing. We are going to do this by fetching a random Mars image and displaying it to the user.

Step 1: Fetch a Random Picture

You will need a Flickr API key to fetch a random list of contextualized images. The image endpoint is the following:

As you can see, the request is fairly simple. You are telling Flickr to return results formatted as JSON (format=json) without the JSONP callback wrapper (nojsoncallback=1). You are searching for images (method=flickr.photos.search), and the tags you are interested in are related to Mars (tags=mars,planet,rover). Take a look at the documentation for more information about the format of the request URL.
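For illustration, the request URL can be assembled like this. The base address is Flickr's standard REST endpoint; YOUR_API_KEY is a placeholder that must be replaced with a real key.

```java
// Assembles the Flickr search URL with the parameters discussed above.
public class FlickrUrl {
    public static String searchUrl(String apiKey, String tags) {
        return "https://api.flickr.com/services/rest/"
                + "?method=flickr.photos.search"
                + "&api_key=" + apiKey
                + "&tags=" + tags
                + "&format=json"
                + "&nojsoncallback=1";
    }

    public static void main(String[] args) {
        System.out.println(searchUrl("YOUR_API_KEY", "mars,planet,rover"));
    }
}
```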

Start by declaring the following variables:

Next, implement the searchRandomImage method:

As you can see, Flickr sends back a JSONArray containing the images. The method I wrote to fetch a random image generates a random number between zero and the size of the array. It takes the item at that index from the array of results and constructs the URL for the image following these guidelines.
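Flickr's documented source-URL format combines the farm, server, id, and secret fields of one search result. A sketch of the two steps, with the JSON parsing left out:

```java
// Builds the image URL from the fields of a search result, following
// Flickr's documented source-URL format:
//   https://farm{farm}.staticflickr.com/{server}/{id}_{secret}.jpg
import java.util.Random;

public class FlickrPhotoUrl {
    public static String photoUrl(int farm, String server, String id, String secret) {
        return "https://farm" + farm + ".staticflickr.com/"
                + server + "/" + id + "_" + secret + ".jpg";
    }

    // Picks a random index between zero (inclusive) and size (exclusive),
    // as the article's method does with the JSONArray of results.
    public static int randomIndex(Random random, int size) {
        return random.nextInt(size);
    }
}
```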

Like before, we need a method for error handling:

Finally, call searchRandomImage in the onCreate method and don't forget to catch any exceptions.

Step 2: Show the Picture

Now that we have a URL to load, we can show the picture. You already learned how to do this in the previous article.

In the onResponse method we wrote in the previous step, we are finally able to handle the result.

Step 3: Showing a New Image Every Day

Maybe you already noticed that we are bypassing Volley's caching system by fetching a random image every time the application is launched. We need to find a way to show the same image on a particular day.

The simplest way to achieve this is by using Android’s SharedPreferences. Start by declaring the variables we'll need for this.

Next, in the onCreate method, before the call to searchRandomImage, initialize mSharedPref.

The idea is to store the current day every time we fetch a new random picture. Of course, we store the URL of the image alongside the day. When the application launches, we check whether we already have an entry in the SharedPreferences for the current day. If we have a match, we use the stored URL. Otherwise we fetch a random image and store its URL in the SharedPreferences.
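The logic can be sketched framework-free as follows. A HashMap stands in for Android's SharedPreferences, and the key names are invented here for illustration.

```java
// Framework-free sketch of the day-based image cache described above.
import java.time.LocalDate;
import java.util.HashMap;
import java.util.Map;

public class DailyImageCache {
    private final Map<String, String> prefs = new HashMap<>();

    // Returns the stored URL if an image was already fetched today,
    // otherwise null, meaning a new random image is needed.
    public String cachedUrlFor(LocalDate today) {
        if (today.toString().equals(prefs.get("last_day"))) {
            return prefs.get("last_url");
        }
        return null;
    }

    // Called right after a fresh random image URL is obtained.
    public void store(LocalDate today, String url) {
        prefs.put("last_day", today.toString());
        prefs.put("last_url", url);
    }
}
```

On Android you would read and write the same two values with SharedPreferences.getString and SharedPreferences.Editor.putString instead of a map.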

In searchRandomImage, after the definition of imageUrl, add the following lines of code:

The onCreate method, after the definition of mSharedPref, now becomes:

That's it. Your application is ready. Feel free to download the source files of this tutorial from GitHub to see the completed project. Take a look at it if you're running into issues.

Bonus Tip: Improving the User Interface

Step 1: Font

The font used in a user interface often determines the look and feel of an application. Let's start by replacing the default Roboto font with a more appealing one, such as Lato Light.

Create a new folder named fonts in the assets folder. If you can’t find the assets folder, you have to create it at the same level as the java folder. The folder structure should look something like app\src\main\assets\fonts.

Copy the Lato-light.ttf file into the fonts folder. In the onCreate method, you need to override the default typeface of the views in which you'd like to use the new font.

Step 2: Transparent Status Bar

Following the guidelines for Android Material Design, we can make the status bar transparent. This way, the background will be partially visible through the status bar.

You can achieve this by making a small change in the application's theme. Edit the project's v21\style.xml file like this:
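A possible version is shown below. The style name and base theme are assumptions; the two items are the standard Lollipop attributes for drawing the layout behind a transparent status bar.

```xml
<style name="AppTheme" parent="android:Theme.Material.Light.NoActionBar">
    <item name="android:windowDrawsSystemBarBackgrounds">true</item>
    <item name="android:statusBarColor">@android:color/transparent</item>
</style>
```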

Make sure that the AndroidManifest.xml is already set to use the theme:

Conclusion

We made a long journey. In the first article, we started talking about Volley and its applications. In this tutorial, we looked at a practical way to implement the concepts we learned by building a weather application for Mars. You should now have a good understanding of the Volley library, how it works, and what you can use it for.

2015-05-20T16:55:41.000Z2015-05-20T16:55:41.000ZGianluca Segato

Design Patterns: Delegation

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-23901

The delegation pattern is among the most common patterns in iOS and OS X development. It is a simple pattern that is heavily used by Apple's frameworks and even the simplest iOS application leverages delegation to do its work. Let's start by looking at the definition of delegation.

1. What Is Delegation?

Definition

The definition of the delegation pattern is short and simple. This is how Apple defines the pattern.

A delegate is an object that acts on behalf of, or in coordination with, another object when that object encounters an event in a program.

Let's break that down. The delegation pattern involves two objects, the delegate and the delegating object. The UITableView class, for example, defines a delegate property to which it delegates events. The delegate property needs to conform to the UITableViewDelegate protocol, which is defined in the header file of the UITableView class.

In this example, the table view instance is the delegating object. The delegate is usually a view controller, but it can be any object that conforms to the UITableViewDelegate protocol. If you're unfamiliar with protocols, a class conforms to a protocol if it implements the required methods of the protocol. We'll look at an example a bit later.

When the user taps a row in the table view, the table view notifies its delegate by sending it a tableView(_:didSelectRowAtIndexPath:) message. The first argument of this method is the table view sending the message. The second argument is the index path of the row the user tapped.

The table view only notifies its delegate of this event. It is up to the delegate to decide what needs to happen when such an event occurs. This separation of responsibilities, as you'll learn in a moment, is one of the key benefits of the delegation pattern.
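The pattern itself is language-agnostic. Since the article's listings target Objective-C and Swift, here is a minimal plain-Java sketch of the same idea, with every name invented for illustration: the delegating object only detects the event and forwards it, and the delegate decides what happens.

```java
// Language-neutral sketch of delegation.
public class DelegationDemo {
    // The "protocol": any object implementing it can act as the delegate.
    public interface TableDelegate {
        // The delegating object passes itself as the first argument,
        // mirroring the Cocoa convention.
        void didSelectRow(Table sender, int row);
    }

    // The delegating object.
    public static class Table {
        private TableDelegate delegate;

        public void setDelegate(TableDelegate delegate) {
            this.delegate = delegate;
        }

        // Detect the event, then hand it off. The delegate is optional:
        // without one, the event is simply ignored.
        public void tapRow(int row) {
            if (delegate != null) {
                delegate.didSelectRow(this, row);
            }
        }
    }
}
```

Note that Table compiles without knowing anything about any concrete delegate class, which is exactly the loose coupling discussed below.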

Advantages

Reusability

Delegation has several advantages, the first one being reusability. Because the table view delegates user interaction to its delegate, the table view doesn't need to know what needs to happen when one of its rows is tapped.

Put differently, the table view can remain ignorant of the implementation details of how user interaction is handled by the application. This responsibility is delegated to the delegate, a view controller for example.

The direct benefit is that the UITableView class can be used as is in most situations. Most of the time, there's no need to subclass UITableView to adapt it to your application's needs.

Loose Coupling

Another important advantage of delegation is loose coupling. In my article about singletons, I emphasize that tight coupling should be avoided as much as possible. Delegation is a design pattern that actively promotes loose coupling. What do I mean by that?

The UITableView class is coupled to its delegate to do its work. If no delegate is associated with the table view, the table view cannot handle or respond to user interaction. This means that there needs to be a certain level of coupling. The table view and its delegate, however, are loosely coupled, because every class that implements the UITableViewDelegate protocol can act as the table view's delegate. The result is a flexible and loosely coupled object graph.

Separation of Responsibilities

A lesser known advantage of delegation is separation of responsibilities. Whenever you create an object graph, it is important to know which objects are responsible for which tasks. The delegation pattern makes this very clear.

In the case of the UITableView class, the delegate of the table view is responsible for handling user interaction. The table view itself is responsible for detecting user interaction. This is a clear separation of responsibilities. Such a separation makes your job as a developer much easier and clearer.

2. Example

There are a few flavors of the delegation pattern. Let's continue by further exploring the UITableViewDelegate protocol.

Delegation

The UITableViewDelegate protocol needs to be implemented by the table view's delegate. The table view notifies its delegate through the UITableViewDelegate protocol about user interaction, but it also uses the delegate for its layout.

An important difference between Swift and Objective-C is the possibility to mark protocol methods as optional. In Objective-C, the methods of a protocol are required by default. The methods of the UITableViewDelegate protocol, however, are optional. In other words, it is possible for a class to conform to the UITableViewDelegate protocol without implementing any of the protocol's methods.

In Swift, however, a class conforming to a particular protocol is required to implement every method defined by the protocol. This is much safer since the delegating object doesn't need to verify whether the delegate implements a protocol method. This subtle, but important, difference is illustrated later in this tutorial when we implement the delegation pattern.

Data Source

There is another pattern that is closely related to the delegation pattern, the data source pattern. The UITableViewDataSource protocol is an example of this pattern. The UITableView class exposes a dataSource property that is of type UITableViewDataSource (id<UITableViewDataSource> in Objective-C). This means that the table view's data source can be any object that implements the UITableViewDataSource protocol.

The data source object is responsible for managing the data of the object it serves. It's important to note that the data source object is responsible for keeping a reference to the items it exposes to the target object, such as a table view or collection view.

A table view, for example, asks its data source for the data it needs to display. The table view is not responsible for keeping a hold of the data objects it needs to display. That role is handed to the data source object.

The data source pattern fits nicely in the Model-View-Controller or MVC pattern. Why is that? A table view, for example, is part of the view layer. It doesn't and shouldn't know about the model layer and isn't in charge of handling the data that is coming from the model layer. This implies that the data source of a table view, or any other view component that implements the data source pattern, is often a controller of some sort. On iOS, it's usually a UIViewController subclass.

The method signatures of a data source protocol follow the same pattern as those of a delegate protocol. The object sending the messages to the data source is passed as the first argument. The data source protocol should only define methods that relate to the data that's being used by the requesting object.

A table view, for example, asks its data source for the number of sections and rows it should display. But it also notifies the data source that a row or section was inserted or deleted. The latter is important since the data source needs to update itself to reflect the changes visible in the table view. If the table view and the data source get out of sync, bad things happen.

3. Implementation

Objective-C

Implementing the delegate pattern is pretty simple now that we understand how it works. Take a look at the following Objective-C example.

We declare a class, AddItemViewController, which extends UIViewController. The class declares a property, delegate, of type id<AddItemViewControllerDelegate>. Note that the property is marked as weak, which means that an AddItemViewController instance keeps a weak reference to its delegate.

Also note that I've added a forward protocol declaration below the import statement of the UIKit framework. This is necessary to avoid a compiler warning. We could move the protocol declaration below the import statement, but I prefer to put it below the class interface. This is nothing more than a personal preference.

The protocol declaration is also pretty simple. The AddItemViewControllerDelegate protocol extends the NSObject protocol. This isn't mandatory, but it will prove to be very useful. We'll find out why that is a bit later.

The AddItemViewControllerDelegate protocol declares two required methods and one optional method. As I mentioned earlier, it's a good practice to pass the delegating object as the first parameter of every delegate method to inform the delegate which object is sending the message.

The required methods notify the delegate about an event, a cancelation or an addition. The optional method asks the delegate for feedback. It expects the delegate to return YES or NO.

This is the first piece of the delegation puzzle. We've declared a class that declares a delegate property and we've declared a delegate protocol. The second piece of the puzzle is invoking the delegate methods in the AddItemViewController class. Let's see how that works.

In the implementation of the AddItemViewController class, we implement a cancel: action. This action could be hooked up to a button in the user interface. If the user taps the button, the delegate is notified of this event and, as a result, the delegate could dismiss the AddItemViewController instance.

It is recommended to verify that the delegate object isn't nil and that it implements the delegate method we're about to invoke, viewControllerDidCancel:. This is easy thanks to the respondsToSelector: method, declared in the NSObject protocol. This is the reason why the AddItemViewControllerDelegate protocol extends the NSObject protocol. By extending the NSObject protocol, we get this functionality for free.

You can omit the check for the delegate property being nil, since respondsToSelector: will return NO if the delegate property is nil (in Objective-C, a message sent to nil simply returns a zero value). I usually add this check since it clearly shows what we're testing.

The third and final piece of the puzzle is the implementation of the delegate protocol by the delegate object. The following code snippet shows the creation of an AddItemViewController instance and the implementation of one of the delegate methods.

Don't forget to make the class that acts as the delegate conform to the AddItemViewControllerDelegate protocol, as shown below. You can do this in the class interface or in a private class extension.

Swift

In Swift, the delegation pattern is just as easy to implement and you'll find that Swift makes delegation slightly more elegant. Let's implement the above example in Swift. This is what the AddItemViewController class looks like in Swift.

The protocol declaration looks a bit different in Swift. Note that the AddItemViewControllerDelegate protocol extends the NSObjectProtocol instead of the NSObject protocol. In Swift, classes and protocols cannot have the same name, which is why the NSObject protocol is named differently in Swift.

The delegate property is a variable of type AddItemViewControllerDelegate?. Note the question mark at the end of the protocol name. The delegate property is an optional.

In the cancel(_:) method, we invoke the viewControllerDidCancel(_:) delegate method. That single line shows how elegant Swift can be. We safely unwrap the delegate property before invoking the delegate method. There's no need to check if the delegate implements the viewControllerDidCancel(_:) method since every method of a protocol is required in Swift.

Let's now look at the ViewController class, which implements the AddItemViewControllerDelegate protocol. The interface shows us that the ViewController class extends the UIViewController class and adopts the AddItemViewControllerDelegate protocol.

In the addItem(_:) method, we initialize an instance of the AddItemViewController class, set its delegate property, and present it to the user. Note that we've implemented every delegate method of the AddItemViewControllerDelegate protocol. If we don't, the compiler will tell us that the ViewController class doesn't conform to the AddItemViewControllerDelegate protocol. Try this out by commenting out one of the delegate methods.

Swift Protocol Implementation Warning

Conclusion

Delegation is a pattern you'll come across frequently when developing iOS and OS X applications. Cocoa relies heavily on this design pattern so it's important to become familiar with it.

Since the introduction of blocks, a few years ago, Apple has slowly offered an alternative blocks-based API to some delegation implementations. Some developers have followed Apple's lead by offering their own blocks-based alternatives. The popular AFNetworking library, for example, relies heavily on blocks instead of delegation, resulting in an elegant, intuitive API.

2015-05-22T17:30:08.000Z2015-05-22T17:30:08.000ZBart Jacobs

Design Patterns: Delegation

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-23901

The delegation pattern is among the most common patterns in iOS and OS X development. It is a simple pattern that is heavily used by Apple's frameworks and even the simplest iOS application leverages delegation to do its work. Let's start by looking at the definition of delegation.

1. What Is Delegation?

Definition

The definition of the delegation pattern is short and simple. This is how Apple defines the pattern.

A delegate is an object that acts on behalf of, or in coordination with, another object when that object encounters an event in a program.

Let's break that down. The delegation pattern involves two objects, the delegate and the delegating object. The UITableView class, for example, defines a delegate property to which it delegates events. The delegate property needs to conform to the UITableViewDelegate protocol, which is defined in the header file of the UITableView class.

In this example, the table view instance is the delegating object. The delegate is usually a view controller, but it can be any object that conforms to the UITableViewDelegate protocol. If you're unfamiliar with protocols, a class conforms to a protocol if it implements the required methods of the protocol. We'll look at an example a bit later.

When the user taps a row in the table view, the table view notifies its delegate by sending it a message of tableView(_:didSelectRowAtIndexPath:). The first argument of this method is the table view sending the message. The second argument is the index path of the row the user tapped.

The table view only notifies its delegate of this event. It is up to the delegate to decide what needs to happen when such an event occurred. This separation of responsibilities, as you'll learn in a moment, is one of the key benefits of the delegation pattern.

Advantages

Reusability

Delegation has several advantages, the first one being reusability. Because the table view delegates user interaction to its delegate, the table view doesn't need to know what needs to happen when one of its rows is tapped.

Put differently, the table view can remain ignorant of the implementation details of how user interaction is handled by the application. This responsibility is delegated to the delegate, a view controller for example.

The direct benefit is that the UITableView class can be used as is in most situations. Most of the times, there's no need to subclass UITableView to adapt it to your application's needs.

Loose Coupling

Another important advantage of delegation is loose coupling. In my article about singletons, I emphasize that tight coupling should be avoided as much as possible. Delegation is a design pattern that actively promotes loose coupling. What do I mean by that?

The UITableView class is coupled to its delegate to do its work. If no delegate is associated with the table view, the table view cannot handle or respond to user interaction. This means that there needs to be a certain level of coupling. The table view and its delegate, however, are loosely coupled, because every class that implements the UITableViewDelegate protocol can act as the table view's delegate. The result is a flexible and loosely coupled object graph.

Separation of Responsibilities

A lesser known advantage of delegation is separation of responsibilities. Whenever you create an object graph, it is important to know which objects are responsible for which tasks. The delegation pattern makes this very clear.

In the case of the UITableView class, the delegate of the table view is responsible for handling user interaction. The table view itself is responsible for detecting user interaction. This is a clear separation of responsibilities. Such a separation makes your job as a developer much easier and clearer.

2. Example

There are a few flavors of the delegation pattern. Let's continue by further exploring the UITableViewDelegate protocol.

Delegation

The UITableViewDelegate protocol needs to be implemented by the table view's delegate. The table view notifies its delegate through the UITableViewDelegate protocol about user interaction, but it also uses the delegate for its layout.

An important difference between Swift and Objective-C is the possibility to mark protocol methods as optional. In Objective-C, the methods of a protocol are required by default. The methods of the UITableViewDelegate protocol, however, are optional. In other words, it is possible for a class to conform to the UITableViewDelegate protocol without implementing any of the protocol's methods.

In Swift, however, a class conforming to a particular protocol is required to implement every method defined by the protocol. This is much safer since the delegating object doesn't need to verify whether the delegate implements a protocol method. This subtle, but important, difference is illustrated later in this tutorial when we implement the delegation pattern.

Data Source

There is another pattern that is closely related to the delegation pattern, the data source pattern. The UITableViewDataSource protocol is an example of this pattern. The UITableView class exposes a dataSource property that is of type UITableViewDataSource (id<UITableViewDataSource> in Objective-C). This means that the table view's data source can be any object that implements the UITableViewDataSource protocol.

The data source object is responsible for managing the data source of the object it is the data source of. It's important to note that the data source object is responsible for keeping a reference to the items it exposes to the target object, such as a table view or collection view.

A table view, for example, asks its data source for the data it needs to display. The table view is not responsible for keeping a hold of the data objects it needs to display. That role is handed to the data source object.

The data source pattern fits nicely in the Model-View-Controller or MVC pattern. Why is that? A table view, for example, is part of the view layer. It doesn't and shouldn't know about the model layer and isn't in charge of handling the data that is coming from the model layer. This implies that the data source of a table view, or any other view component that implements the data source pattern, is often a controller of some sort. On iOS, it's usually a UIViewController subclass.

The method signatures of a data source protocol follow the same pattern as those of a delegate protocol. The object sending the messages to the data source is passed as the first argument. The data source protocol should only define methods that relate to the data that's being used by the requesting object.

A table view, for example, asks its data source for the number of sections and rows it should display. But it also notifies the data source that a row or section was inserted or deleted. The latter is important since the data source needs to update itself to reflect the changes visible in the table view. If the table view and the data source get out of sync, bad things happen.

3. Implementation

Objective-C

Implementing the delegate pattern is pretty simple now that we understand how it works. Take a look at the following Objective-C example.

We declare a class, AddItemViewController, which extends UIViewController. The class declares a property, delegate, of type id<AddItemViewControllerDelegate>. Note that the property is marked as weak, which means that an AddItemViewController instance keeps a weak reference to its delegate.

Also note that I've added a forward protocol declaration below the import statement of the UIKit framework. This is necessary to avoid a compiler warning. We could move the protocol declaration below the import statement, but I prefer to put it below the class interface. This is nothing more than a personal preference.

The protocol declaration is also pretty simple. The AddItemViewControllerDelegate protocol extends the NSObject protocol. This isn't mandatory, but it will prove to be very useful. We'll find out why that is a bit later.

The AddItemViewControllerDelegate protocol declares two required methods and one optional method. As I mentioned earlier, it's a good practice to pass the delegating object as the first parameter of every delegate method to inform the delegate which object is sending the message.

The required methods notify the delegate about an event, a cancelation or an addition. The optional method asks the delegate for feedback. It expects the delegate to return YES or NO.

This is the first piece of the delegation puzzle. We've declared a class that declares a delegate property and we've declared a delegate protocol. The second piece of the puzzle is invoking the delegate methods in the AddItemViewController class. Let's see how that works.

In the implementation of the AddItemViewController class, we implement a cancel: action. This action could be hooked up to a button in the user interface. If the user taps the button, the delegate is notified of this event and, as a result, the delegate could dismiss the AddItemViewController instance.

It is recommended to verify that the delegate object isn't nil and that it implements the delegate method we're about to invoke, viewControllerDidCancel:. This is easy thanks to the respondsToSelector: method, declared in the NSObject protocol. This is the reason why the AddItemViewControllerDelegate protocol extends the NSObject protocol. By extending the NSObject protocol, we get this functionality for free.

You can omit the check for the delegate property being nil, since respondsToSelector: will return nil if the delegate property is nil. I usually add this check since it clearly shows what we're testing.

The third and final piece of the puzzle is the implementation of the delegate protocol by the delegate object. The following code snippet shows the creation of an AddItemViewController instance and the implementation of one of the delegate methods.

Don't forget to conform the class that acts as the delegate to the AddItemViewControllerDelegate protocol as shown below. You can add this in the class interface or in a private class extension.

Swift

In Swift, the delegation pattern is just as easy to implement and you'll find that Swift makes delegation slightly more elegant. Let's implement the above example in Swift. This is what the AddItemViewController class looks like in Swift.

The protocol declaration looks a bit different in Swift. Note that the AddItemViewControllerDelegate protocol extends the NSObjectProtocol instead of the NSObject protocol. In Swift, classes and protocols cannot have the same name, which is why the NSObject protocol is named differently in Swift.

The delegate property is a variable of type AddItemViewControllerDelegate?. Note the question mark at the end of the protocol name. The delegate property is an optional.

In the cancel(_:) method, we invoke the viewControllerDidCancel(_:) delegate method. That single line shows how elegant Swift can be. We safely unwrap the delegate property before invoking the delegate method. There's no need to check if the delegate implements the viewControllerDidCancel(_:) method since every method of a protocol is required in Swift.

Let's now look at the ViewController class, which implements the AddItemViewControllerDelegate protocol. The interface shows us that the ViewController class extends the UIViewController class and adopts the AddItemViewControllerDelegate protocol.

In the addItem(_:) method, we initialize an instance of the AddItemViewController class, set its delegate property, and present it to the user. Note that we've implemented every delegate method of the AddItemViewControllerDelegate protocol. If we don't, the compiler will tell us that the ViewController class doesn't conform to the AddItemViewControllerDelegate protocol. Try this out by commenting out one of the delegate methods.
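
The listing itself is missing, so here is a hedged Swift sketch of the ViewController class as described; the plain initializer and the dismissal in the delegate method are assumptions.

```swift
class ViewController: UIViewController, AddItemViewControllerDelegate {

    @IBAction func addItem(sender: AnyObject) {
        // Initialize the view controller, set its delegate, and present it.
        let viewController = AddItemViewController()
        viewController.delegate = self
        presentViewController(viewController, animated: true, completion: nil)
    }

    // Every delegate method must be implemented, or the compiler complains
    // that ViewController doesn't conform to the protocol.
    func viewControllerDidCancel(viewController: AddItemViewController) {
        dismissViewControllerAnimated(true, completion: nil)
    }
}
```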

Swift Protocol Implementation Warning

Conclusion

Delegation is a pattern you'll come across frequently when developing iOS and OS X applications. Cocoa relies heavily on this design pattern so it's important to become familiar with it.

Since the introduction of blocks a few years ago, Apple has slowly offered alternative blocks-based APIs to some delegation implementations. Some developers have followed Apple's lead by offering their own blocks-based alternatives. The popular AFNetworking library, for example, relies heavily on blocks instead of delegation, resulting in an elegant, intuitive API.

2015-05-22T17:30:08.000Z2015-05-22T17:30:08.000ZBart Jacobs

Using Android's VectorDrawable Class

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-23948

Introduction

While Android does not support SVGs (Scalable Vector Graphics) directly, Lollipop introduced a new class called VectorDrawable, which allows designers and developers to draw assets in a similar fashion using only code.

In this article, you will learn how to create VectorDrawable assets with XML files and animate them in your projects. This is only supported on devices running Android 5.0 or above, and there are currently no support-library implementations. The source files of this tutorial can be found on GitHub.

1. Creating a Vector Drawable

The main similarity between a VectorDrawable and a standard SVG image is that both are drawn out using a path value. While understanding how SVG paths are drawn is out of the scope of this article, official documentation can be found on the W3C website. For this article, you'll simply need to know that the path tag is where the drawing occurs. Let's take a look at the SVG file that draws out the following image:

Image of a CPU that will be drawn out in code

There are five major parts to this image:

  • a square for the CPU body made up of two arches
  • four groups of five lines that represent the CPU's wires

The following code draws this image out as an SVG:

While this may look a little overwhelming, you don't actually need to fully understand how everything is drawn out to implement a VectorDrawable in your code. However, it should be noted that I separated each of the five sections into their own unique block in the code for readability.

The top section consists of two arches to draw out the rounded square and the sections that follow represent the bottom, top, right, and left sets of lines respectively. To turn this SVG code into a VectorDrawable, you first need to define the vector object in XML. The following code is taken from the vector_drawable_cpu.xml file in the sample code for this article.
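
The referenced XML didn't survive the export. A minimal sketch of the vector element follows; the size and viewport values here are placeholders, not the article's originals.

```xml
<vector xmlns:android="http://schemas.android.com/apk/res/android"
    android:width="150dp"
    android:height="150dp"
    android:viewportWidth="300"
    android:viewportHeight="300">

    <!-- <path> elements go here -->

</vector>
```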

Next, you can add in the path data. The following code is broken up into five different path tags rather than one large path.
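
The original path data is not reproduced here, so this sketch only shows the structure: five path tags inside the vector element, one per section. The names and colors are assumptions, and "..." stands in for the actual pathData values.

```xml
<path
    android:name="body"
    android:fillColor="#000000"
    android:pathData="..." />

<path
    android:name="bottom"
    android:fillColor="#000000"
    android:pathData="..." />

<!-- three more <path> tags for the top, right, and left wire groups -->
```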

As you can see, each path section simply uses the pathData attribute for drawing. You can now include the VectorDrawable XML file as a drawable in a standard ImageView and it will scale to any size your app requires, without needing to use any Java code.
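
Referencing the drawable from a layout can look like this; the layout attributes are illustrative:

```xml
<ImageView
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:src="@drawable/vector_drawable_cpu" />
```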

2. Animating Vector Drawables

Now that you know how to create images using only code, it's time to have a little fun and animate them. In the following animation, you'll notice that each of the groups of wires are pulsing towards and away from the CPU.

Example of animated VectorDrawables

To achieve this effect, you will need to wrap each section that you want to animate in a <group> tag. The updated version of vector_drawable_cpu.xml then looks like this:
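
The updated file isn't reproduced here, so the following is a structural sketch of the wrapping; the group name is an assumption, and it must match the target names used later in the animated-vector file. "..." stands in for the original pathData.

```xml
<group android:name="top">
    <path android:pathData="..." />
</group>

<!-- repeat for the bottom, left, and right wire groups -->
```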

Next, you will want to create animators for each animation type. In this case, there is one for each group of wires for a total of four. Below is an example of the top group's animation and you will also need one for the bottom, left, and right. Each of the animator XML files can be found in the sample code.

As you can see, the propertyName is set to translateY, which means the animation will move along the Y axis. The valueFrom and valueTo attributes control the start and end positions. By setting repeatMode to reverse and repeatCount to infinite, the animation will loop for as long as the VectorDrawable is visible. The duration of the animation is set to 250, which is the time in milliseconds.
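
The animator XML is missing from the export; here is a sketch built from the attributes described above. The valueFrom and valueTo numbers are placeholders, since the originals aren't given in the text.

```xml
<objectAnimator
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:propertyName="translateY"
    android:valueFrom="0"
    android:valueTo="-10"
    android:repeatMode="reverse"
    android:repeatCount="infinite"
    android:duration="250" />
```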

To apply the animations to your drawable file, you will need to create a new animated-vector XML file to associate the animators with the VectorDrawable groups. The following code is used to create the animated_cpu.xml file.
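
The animated_cpu.xml listing didn't survive the export. A sketch of its structure follows; the group name "top" and the animator file name are assumptions, and each animated group gets its own target entry.

```xml
<animated-vector
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:drawable="@drawable/vector_drawable_cpu">

    <target
        android:name="top"
        android:animation="@animator/animator_top" />

    <!-- one <target> per animated group: bottom, left, and right -->

</animated-vector>
```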

When all of your XML files are ready to go, you can use the animated_cpu.xml drawable in an ImageView to display it.

To start your animation, you will need to get an instance of the Animatable from the ImageView and call start.
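
The Java snippet is missing here; a minimal sketch of the step just described follows. The view id is an assumption, and Animatable comes from android.graphics.drawable.

```java
// Grab the drawable from the ImageView showing animated_cpu.xml
// and start its animation.
ImageView imageView = (ImageView) findViewById(R.id.cpu_image);
Animatable animatable = (Animatable) imageView.getDrawable();
animatable.start();
```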

After start has been called, the wires on the CPU image will start to move with very minimal Java code used.

3. Transforming Vector Drawables

A common use case for a VectorDrawable is transforming one image into another, such as the action bar icon that changes from a hamburger icon into an arrow. To do this, both the source and destination paths must follow an identical format for the number of elements. For this example we will define the left and right facing arrows seen above as strings.

Next, you will need to create an initial drawable for an arrow using the path for left_arrow. In the sample code, it is called vector_drawable_left_arrow.xml.

The main difference between the CPU animation and the transformation lies in the animator_left_right_arrow.xml file.

You'll notice the valueFrom and valueTo properties reference the path data for the left and right arrow, the valueType is set to pathType and propertyName is set to pathData. When these are set, the animator will know to change one set of path data to the other. When the animator is finished, you need to associate the VectorDrawable with the objectAnimator using a new animated-vector object.
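
The animator_left_right_arrow.xml listing is missing, so this sketch reconstructs it from the attributes named above. The string resource names and the duration are assumptions; left_arrow is mentioned in the text, right_arrow is inferred.

```xml
<objectAnimator
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:propertyName="pathData"
    android:valueFrom="@string/left_arrow"
    android:valueTo="@string/right_arrow"
    android:valueType="pathType"
    android:duration="500" />
```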

Finally, you'll simply need to associate the animated drawable with an ImageView and start the animation in your Java code.

Conclusion

As you have seen, the VectorDrawable class is fairly straightforward to use and allows for a lot of customization to add simple animations. While the VectorDrawable class is currently only available on devices running Android 5.0 and above, it will become invaluable as more devices support Lollipop and future Android releases.

2015-05-25T16:50:32.000Z2015-05-25T16:50:32.000ZPaul Trebilcox-Ruiz


An Introduction to Appium

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-23861
What You'll Be Creating

Automated testing is known to be very valuable to any programmer. It is a tool that allows the simulation of a person’s actions on a specific device and it’s favored because it lacks the errors or speed limitations of an actual person.

Appium is an automated testing tool, based on the popular testing framework Selenium, that enables automated testing on both native iOS and Android apps. Its main limitation is that it is only built for OS X and Linux.

At my office, we use Appium for regression testing. Regression testing simply means testing existing features to ensure they continue to function as expected as the product grows. It’s very important to know when features break so that progress can continue in a linear fashion.

In this tutorial, I'll show you how to set up Appium, generate automated scripts, and create a few simple login tests for an Android application.

1. Appium Setup

Getting Appium doesn't take much time, but it’s easy to mess up the setup. The first time I installed Appium, I naively downloaded the application (.dmg) before setting it up on the command line. It turns out that if you download the application first, it may make getting Appium on the command line quite difficult. So start by getting Appium from the command line.

In the following steps, I am assuming you have Homebrew installed and are running OS X. If you don't have node installed, execute the following command from the command line:
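
The command itself is missing from the export; with Homebrew it is:

```shell
brew install node
```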

Next, install Appium using the node package manager. It's important that you do not use sudo for these commands or Appium will not work.
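
The missing command is the standard global npm install (note the absence of sudo):

```shell
npm install -g appium
```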

If you don't have the permissions to use these commands, you'll have to fix the folder permissions with chmod yourself rather than resorting to sudo. The location of the folder may be different for you, depending on your setup.

To run the Appium server and see if you have set it up correctly, execute the following command from the command line.
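
The command didn't survive the export; starting the server is simply:

```shell
appium
```

By default, the server listens on port 4723, which the test scripts later connect to.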

Appium on the command line allows you to run Selenium tests not only on Android and iOS simulators, but also on physical devices. The application has a nice user interface that allows you to run a simulated version of your AUT (Application Under Testing) and easily generate Selenium code for simple actions on the application. You will mainly use the application in the initial phase of creating tests and then use the command line for running tests.

Why don't we use the application for the entire flow? If you plan on running a suite of tests on various devices in an automated fashion, perhaps on a schedule, being able to run Appium from the command line will be essential.

The Appium application can be downloaded from Bitbucket. After downloading the disk image, double-click it and drag the Appium application to your Applications folder.

2. AUT Setup

Next, we need an application to test. For this introduction to Appium, we will test a native Android application that I’m calling AUT. It is a very simple login and logout application. On a successful login, it’ll bring us to a page that tells us we successfully logged in, displaying a logout button.

Many apps have a login feature, so we will create a basic suite of login tests covering the possible outcomes of a user interacting with the login flow. It's not so much that we want to make sure that login works; we want to test the app's response to the various ways a user can fail to log in, for example, by entering invalid credentials.

Since it's impossible to own every available Android device, I usually test on simulated devices. This allows me to easily change which device is being simulated for compatibility testing. To get an Android simulator, get AVD Manager and set up any Android device of your choosing, compatible with API level 21.

  1. Download the APK from GitHub.
  2. Get JDK, if you don’t already have it.
  3. Get the Android SDK with the AVD Manager.
  4. Set the ANDROID_HOME, JAVA_HOME, and PATH environment variables in your .profile or .bash_profile (or .zshrc if you use zsh).

ANDROID_HOME should point to the location of the Android SDK, while JAVA_HOME should point to the location of the JDK.

This is how you can add these paths to your .bash_profile. Note that the paths may be different for you.
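
The original listing is missing; a sketch of the .bash_profile additions follows. The paths here are examples only and will differ on your machine.

```shell
# Example paths; adjust for your own SDK and JDK locations.
export ANDROID_HOME=/usr/local/opt/android-sdk
export JAVA_HOME=$(/usr/libexec/java_home)
export PATH=$PATH:$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools
```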

Next, create a simulated device with the AVD Manager. Make sure to enable Use host GPU and set the VM Heap to 64.

To make the simulator run faster, install HAX from Intel's website.

3. Appium Inspector

It's time to use the Appium Inspector and start writing some tests. Launch the Appium application. In General Settings, uncheck Check for Updates, Prelaunch Application, Override Existing Sessions, and Kill Processes Using Server Port Before Launch.

Appium General Settings

Next, check the Android Radio Button and click the Android Icon. Check App Path and set its value to the location of where you put the APK of the application under test. Check Launch AVD and select the simulated device. Choose 5.1 Lollipop (API Level 21) from the dropdown menu from Platform Version.

Selecting Android
Android Settings

Hit Launch and wait for the app to launch on the simulated device.

You may run into an issue where the application crashes on unlock since we’re using the brand new API 21. To resolve this, launch the application again after manually unlocking the simulated screen.

Once the simulated device has launched the app, hit the Magnifying Glass icon to launch the inspector.

Appium Inspector

This is the Appium inspector. It’s a very convenient tool to help you get started with writing tests for Appium. Essentially, the inspector allows you to perform actions on the native Android application and record your actions as generated code.

The boxes on the left side of the inspector make up the UI Navigator and allow you to navigate the elements of the current activity. At the bottom are the options to interact with the element selected from the element boxes. Details of the selected element are shown in Details. If you do something manually to the simulation, you must hit Refresh for the inspector to recognize those changes. If you want to start to record your actions in code, you must hit the Record button.

Let’s create the code necessary for a successful login. The app has two hardcoded logins, success@envato.com:password and success2@envato.com:password2.

  1. Click Record and observe that there's now code below the inspector. You can choose different languages for this to show in. For this tutorial, we will use Python.
  2. In the UI Navigator, navigate to android.widget.LinearLayout/android.widget.FrameLayout/android.widget.LinearLayout/android.widget.ScrollView/android.widget.LinearLayout/android.widget.EditText[1].
  3. Click Text at the bottom and enter success@envato.com.
  4. Click Send Keys, and observe that the code below now has a new line.

Inspector Send Keys to Email

  5. In the UI Navigator, navigate to android.widget.LinearLayout/android.widget.FrameLayout/android.widget.LinearLayout/android.widget.ScrollView/android.widget.LinearLayout/android.widget.EditText[2].
  6. Enter the password.

Inspector Send Keys to Password

  7. Click Send Keys.
  8. In the UI Navigator, navigate to android.widget.LinearLayout/android.widget.FrameLayout/android.widget.LinearLayout/android.widget.ScrollView/android.widget.LinearLayout/android.widget.Button[1].
  9. Click Touch at the bottom, followed by Tap. We're now at a new activity, so the UI Navigator has changed.

Inspector Tap Sign In

  10. Hit Refresh, since the inspector probably hasn't realized that the simulator is past the loading screen yet.

Inspector Not Refreshed

  11. In the UI Navigator, navigate to android.widget.LinearLayout/android.widget.FrameLayout/android.widget.RelativeLayout/android.widget.Button[1].
  12. Click Tap.

Inspector Tap Logout

In the code below, we have all the code to simulate a successful login. Feel free to play a bit more with the inspector. Later in this tutorial, we will also be writing tests for unsuccessful logins.

4. Login Tests

We will now write some tests using Appium to make sure our login page works as it should. If you don't have Python, then you can download it from the official website.

To use the Appium web driver with Python, you must also get the Appium libraries for Python.

  1. Download and unarchive the .gz file.
  2. Open the .gz file, navigate to the location on the command line, and execute the following command:
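
The command itself is missing from the export; from inside the unarchived directory it is the usual setup.py install:

```shell
python setup.py install
```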

Before we start writing the tests, we'll need to decide which test cases we will be making. A good test suite should consider every possible interaction. Let's start with a simple one.

  1. A successful login.
  2. An unsuccessful login.

First, we must import everything we need for the test. We will use the built-in Python unit test to run our tests. The Appium element is the web driver, which we will use to interact with the Appium server.

We create a class LoginTests to define our suite of tests. The setUp function of our unit test runs at the start of the test. In this method, we set the desired capabilities, such as Android and the app path. We then initialize the web driver self.wd by connecting to the Appium server.

The tearDown function runs after a test and it disconnects from the Appium server.
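
The test class listing didn't survive the export, so the following is a hedged sketch of the skeleton just described, using the Appium Python client. The capability values and APK path are placeholders for your own setup.

```python
import unittest

from appium import webdriver


class LoginTests(unittest.TestCase):

    def setUp(self):
        # Desired capabilities; device name and app path are placeholders.
        desired_caps = {
            'platformName': 'Android',
            'platformVersion': '5.1',
            'deviceName': 'Android Emulator',
            'app': '/path/to/aut.apk'
        }
        # Connect to the Appium server started from the command line.
        self.wd = webdriver.Remote('http://localhost:4723/wd/hub', desired_caps)
        self.wd.implicitly_wait(10)

    def tearDown(self):
        # Disconnect from the Appium server.
        self.wd.quit()


if __name__ == '__main__':
    suite = unittest.TestLoader().loadTestsFromTestCase(LoginTests)
    unittest.TextTestRunner(verbosity=2).run(suite)
```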

The above code block is mostly copied from the Appium inspector code. We perform the required actions on the user interface for a successful login. In the try clause, we try to find the textView element that displays the text Login Success! If an exception is thrown, the test fails.
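
The recorded code is missing from the export; the sketch below shows the shape of such a test. The element lookups are illustrative only — in practice you would paste the locators the inspector generated for you.

```python
    def test_login_success(self):
        # Locators are illustrative; use the inspector-generated ones instead.
        fields = self.wd.find_elements_by_class_name('android.widget.EditText')
        fields[0].send_keys('success@envato.com')
        fields[1].send_keys('password')
        self.wd.find_element_by_class_name('android.widget.Button').click()
        try:
            # The success screen shows a textView with this text.
            self.wd.find_element_by_name('Login Success!')
        except Exception:
            self.fail('The success message was not displayed')
```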

To fail the login test, we deliberately use an incorrect password, wrongpassword, expecting the login to fail. We check if we can find the login button element and fail the test if we cannot.
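
A sketch of that failure test, with the same caveat that the locators are illustrative:

```python
    def test_login_wrong_password(self):
        fields = self.wd.find_elements_by_class_name('android.widget.EditText')
        fields[0].send_keys('success@envato.com')
        fields[1].send_keys('wrongpassword')
        self.wd.find_element_by_class_name('android.widget.Button').click()
        try:
            # If the login button is still present, we never left the login screen.
            self.wd.find_element_by_class_name('android.widget.Button')
        except Exception:
            self.fail('Login unexpectedly succeeded with a wrong password')
```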

This is the main function needed to run our tests. There's nothing wrong with our first test case. However, an unsuccessful login could mean many things. When a user is unable to log in, we want to make sure the user interface is helping them realize how to fix their mistake for a better user experience.

  1. A successful login.
  2. Login with incorrect password.
  3. Login with incorrect email.
  4. Login with no password.
  5. Login with no email.
  6. Login with an invalid email.

We've expanded our test cases from two to six test cases for the login page. It might seem like a lot for such a simple feature, but it's absolutely necessary.

The most difficult part of writing tests is checking expectations. For example, the failed login test checks if an element exists in the user interface. This means that the login tests completely rely on the user interface to tell us whether the requirements are met.

This could be a bad thing, since the user interface doesn't tell us everything about the underlying code. However, the goal is to test the user interface, so checking that a user interface element exists is an appropriate expectation. We could make our expectations more thorough by checking that every expected element is present on the page, or even that every element is correctly positioned.

Conclusion

We've learned how to:

  • set up Appium
  • use Appium's inspector to help creating automated test scripts
  • use automation to create a few simple login tests for an Android application

There is much more to be learned about Appium and testing. The next step might be to create a continuously integrated testing system that utilizes the capabilities of Appium for your own applications.

2015-05-27T15:15:55.000Z2015-05-27T15:15:55.000ZMatthew Kim

An Introduction to Appium

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-23861
What You'll Be Creating

Automated testing is known to be very valuable to any programmer. It is a tool that allows the simulation of a person’s actions on a specific device and it’s favored because it lacks the errors or speed limitations of an actual person.

Appium is an automated testing tool, based on the popular testing framework Selenium, that enables automated testing on both native iOS and Android apps. Its main limitation is that it is only built for OS X and Linux.

At my office, we use Appium for regression testing. Regression testing simply means testing existing features to ensure they continue to function as expected as the product grows. It’s very important to know when features break so that progress can continue in a linear fashion.

In this tutorial, I'll show you how to set up Appium, generate automated scripts, and create a few simple login tests for an Android application.

1. Appium Setup

Getting Appium doesn't take much time, but it’s easy to mess up the setup. The first time I installed Appium, I naively downloaded the application (.dmg) before setting it up on the command line. It turns out that if you download the application first, it may make getting Appium on the command line quite difficult. So start by getting Appium from the command line.

In the following steps, I am assuming you have homebrew installed and are using a OS X. If you don’t have node installed, executed the following command from the command line:

Next, install Appium using the node package manager. It's important that you do not use sudo for these commands or Appium will not work.

If you don’t have the permissions to use these commands, you’ll have to chmod them yourself rather than sudo. The location of the folder may be different for you, depending on your setup.

To run the Appium server and see if you have set it up correctly, execute the following command from the command line.

Appium on the command line allows you to run Selenium tests not only on Android and iOS simulators, but also on physical devices. The application has a nice user interface that allows you to run a simulated version of your AUT (Application Under Testing) and easily generate Selenium code for simple actions on the application. You will mainly use the application in the initial phase of creating tests and then use the command line for running tests.

Why don't we use the application for the entire flow? If you plan on running a suite of tests on various devices in an automated fashion, perhaps on a schedule, being able to run Appium from the command line will be essential.

The Appium application can be downloaded from Bitbucket. After downloading the disk image, double-click it and drag the Appium application to your Applications folder.

2. AUT Setup

Next, we need an application to test. For this introduction to Appium, we will test a native Android application that I’m calling AUT. It is a very simple login and logout application. On a successful login, it’ll bring us to a page that tells us we successfully logged in, displaying a logout button.

Many apps have a login feature so we will create a basic suite of login tests to test the possible outcomes of a user interacting with the login flow. It’s not so much that we want to make sure that login works, we want to test the app’s response to the various ways a user can fail to login, for example, by entering invalid credentials.

Since it's impossible to own every available Android device, I usually test on simulated devices. This allows me to easily change which device is being simulated for compatibility testing. To get an Android simulator, get AVD Manager and set up any Android device of your choosing, compatible with API level 21.

  1. Download the APK from GitHub.
  2. Get JDK, if you don’t already have it.
  3. Get the Android SDK with the AVD Manager.
  4. Set the ANDROID_HOME, JAVA_HOME, and PATH environment variables in your .profile or .bash_profile (or .zshrc if you use zsh).

ANDROID_HOME should point to the location of the Android sdk while JAVA_HOME should point to the location of the JDK.

This is how you can add these paths to your .bash_profile. Note that the paths may be different for you.

Next, create a simulated device with the AVD Manager. Make sure to enable Use host GPU and set the VM Heap to 64.

To make the simulator run faster, install HAX from Intel's website.

3. Appium Inspector

It's time to use the Appium Inspector and start writing some tests. Launch the Appium application. In General Settings, uncheck Check for Updates, Prelaunch Application, Override Existing Sessions, and Kill Processes Using Server Port Before Launch.

Appium General Settings

Next, check the Android Radio Button and click the Android Icon. Check App Path and set its value to the location of where you put the APK of the application under test. Check Launch AVD and select the simulated device. Choose 5.1 Lollipop (API Level 21) from the dropdown menu from Platform Version.

Selecting Android
Android Settings

Hit Launch and wait for the app to launch on the simulated device.

You may run into an issue where the application crashes on unlock since we’re using the brand new API 21. To resolve this, launch the application again after manually unlocking the simulated screen.

Once the simulated device has launched the app, hit the Magnifying Glass icon to launch the inspector.

Appium Inspector

This is the Appium inspector. It’s a very convenient tool to help you get started with writing tests for Appium. Essentially, the inspector allows you to perform actions on the native Android application and record your actions as generated code.

The boxes on the left side of the inspector make up the UI Navigator and allow you to navigate the elements of the current activity. At the bottom are the options to interact with the element selected from the element boxes. Details of the selected element are shown in Details. If you do something manually to the simulation, you must hit Refresh for the inspector to recognize those changes. If you want to start to record your actions in code, you must hit the Record button.

Let’s create the code necessary for a successful login. The app has two hardcoded logins, success@envato.com:password and success2@envato.com:password2.

  1. Clicking Record and observe that there's now code below the inspector. You can choose different languages for this to show in. For this tutorial, we will use Python.
  2. In the UI Navigator navigate to  android.widget.LinearLayout/android.widget.FrameLayout/
    android.widget.LinearLayout/android.widget.ScrollView/
    android.widget.LinearLayout/android.widget.EditText[1].
  3. Click Text at the bottom and enter success@envato.com.
  4. Click Send Keys, and observe that the code below now has a new line.
Inspector Send Keys to Email

5. In the UI Navigator, navigate to  android.widget.LinearLayout/
android.widget.FrameLayout/android.widget.LinearLayout/
android.widget.ScrollView/android.widget.LinearLayout/
android.widget.EditText[2]

6. Enter the password.

Inspector Send Keys to Password

7. Click Send Keys.

8. In the UI Navigator, navigate to android.widget.LinearLayout/android.widget.FrameLayout/
android.widget.LinearLayout/android.widget.ScrollView/
android.widget.LinearLayout/android.widget.Button[1]

9. Click Touch at the bottom, followed by Tap. We're now at a new activity so the UI Navigator has changed.

Inspector Tap Sign In

10. Hit Refresh since the inspector probably hasn't realized that the simulator is past the loading screen now.

Inspector Not Refreshed

11. In the UI Navigator navigate to android.widget.LinearLayout/android.widget.FrameLayout/
android.widget.RelativeLayout/android.widget.Button[1].

12. Click Tap.

Inspector Tap Logout

In the code below, we have all the code to simulate a successful login. Feel free to play a bit more with the inspector. Later in this tutorial, we will also be writing tests for unsuccessful logins.

4. Login Tests

We will now write some tests using Appium to make sure our login page works as it should. If you don't have Python, then you can download it from the official website.

To use the Appium web driver with Python, you must also get the Appium libraries for Python.

  1. Download the .gz archive of the Appium Python client.
  2. Unarchive it, navigate to the extracted directory on the command line, and execute the following command:
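Assuming the archive is the official Appium Python client, the command is the standard setuptools install. Alternatively, `pip install Appium-Python-Client` fetches the same package straight from PyPI without the manual download.

```bash
python setup.py install
```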

Before we start writing the tests, we'll need to decide which test cases we will be making. A good test suite should consider every possible interaction. Let's start with a simple one.

  1. A successful login.
  2. An unsuccessful login.

First, we must import everything we need for the test. We will use Python's built-in unittest module to run our tests. From the appium package we import the web driver, which we will use to interact with the Appium server.

We create a class LoginTests to define our suite of tests. The setUp function of our unit test runs at the start of the test. In this method, we set the desired capabilities, such as Android and the app path. We then initialize the web driver self.wd by connecting to the Appium server.

The tearDown function runs after a test and it disconnects from the Appium server.
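The original listing isn't reproduced here; a sketch of that scaffolding follows. The capability values, app path, and server URL are assumptions to adapt to your own environment.

```python
import unittest

# Assumed values: point these at your own Appium server and .apk build.
APPIUM_SERVER = 'http://localhost:4723/wd/hub'
DESIRED_CAPS = {
    'platformName': 'Android',
    'platformVersion': '5.1',
    'deviceName': 'Android Emulator',
    'app': '/path/to/the/login-app.apk',
}


class LoginTests(unittest.TestCase):

    def setUp(self):
        # Imported here so this sketch loads even without the client installed.
        from appium import webdriver
        # Connect to the Appium server; it launches the app described above.
        self.wd = webdriver.Remote(APPIUM_SERVER, DESIRED_CAPS)
        self.wd.implicitly_wait(10)

    def tearDown(self):
        # Disconnect from the Appium server after every test.
        self.wd.quit()
```

Appium reads the desired capabilities when the session starts and launches the app on the matching device or emulator.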

The above code block is mostly copied from the Appium inspector code. We perform the required actions on the user interface for a successful login. In the try clause, we try to find the TextView element that displays the text Login Success! If the element cannot be found, an exception is thrown and the test fails.

To fail the login test, we deliberately use an incorrect password, wrongpassword, expecting the login to fail. We check if we can find the login button element and fail the test if we cannot.

This is the main function needed to run our tests. There's nothing wrong with our first test case. However, an unsuccessful login could mean many things. When a user is unable to log in, we want to make sure the user interface is helping them realize how to fix their mistake for a better user experience.

  1. A successful login.
  2. Login with incorrect password.
  3. Login with incorrect email.
  4. Login with no password.
  5. Login with no email.
  6. Login with an invalid email.

We've expanded our test cases from two to six test cases for the login page. It might seem like a lot for such a simple feature, but it's absolutely necessary.

The most difficult part of writing tests is checking expectations. For example, the failed login test checks if an element exists in the user interface. This means that the login tests completely rely on the user interface to tell us whether the requirements are met.

This could be a bad thing, since the user interface doesn't tell us everything about the underlying code. However, because the goal is to test the user interface, checking that a user interface element exists is an appropriate expectation. We could make our expectations more thorough by checking that every expected element is present on the page, or even that every element is correctly positioned.

Conclusion

We've learned how to:

  • set up Appium
  • use Appium's inspector to help create automated test scripts
  • use automation to create a few simple login tests for an Android application

There is much more to be learned about Appium and testing. The next step might be to create a continuously integrated testing system that utilizes the capabilities of Appium for your own applications.

2015-05-27T15:15:55.000Z2015-05-27T15:15:55.000ZMatthew Kim

iOS Fundamentals: UIAlertView and UIAlertController


Even if you've only dipped your toes into the world of iOS development, you almost certainly know about UIAlertView. The UIAlertView class has a simple interface and is used to present modal alerts.

Apple has deprecated UIAlertView in iOS 8 though. As of iOS 8, it is recommended to use the UIAlertController class to present action sheets and modal alerts. In this quick tip, I will show you how easy it is to transition from UIAlertView to UIAlertController.

1. Project Setup

Launch Xcode 6.3+ and create a new project based on the Single View Application template.

Choose the Single View Application Template

Name the project Alerts, set Language to Swift, and set Devices to iPhone. Tell Xcode where you'd like to store the project files and click Create.

Configure the Project

Let's start by adding a button to trigger an alert view. Open Main.storyboard and add a button to the view controller's view. Set the button's title to Show Alert and add the necessary constraints to the button to keep it in place.

Create a Simple User Interface

Open ViewController.swift and add an action to the class implementation. Leave the action's implementation empty for the time being. Revisit Main.storyboard and connect the view controller's showAlert action with the button's Touch Up Inside event.
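The empty action can be as simple as this sketch:

```swift
@IBAction func showAlert(sender: AnyObject) {
    // Implemented in the next sections.
}
```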

2. UIAlertView

Let's start by showing an alert view using the UIAlertView class. As I mentioned, the interface of the UIAlertView class is very simple. The operating system takes care of the nitty gritty details. This is what the updated implementation of the showAlert action looks like.
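The listing itself isn't reproduced here; in the Swift syntax of that era it would read roughly like the following sketch (the title and message strings are placeholders):

```swift
@IBAction func showAlert(sender: AnyObject) {
    // Create the alert view with a cancel button and one other button.
    let alertView = UIAlertView(title: "Example", message: "This is a modal alert.", delegate: self, cancelButtonTitle: "Cancel", otherButtonTitles: "OK")

    // Present the alert view to the user.
    alertView.show()
}
```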

The initialization is straightforward. We provide a title and a message, pass in a delegate object, a title for the cancel button, and titles for any other buttons we'd like to include.

The delegate object needs to conform to the UIAlertViewDelegate protocol. Because the view controller will act as the alert view's delegate, the ViewController class needs to conform to the UIAlertViewDelegate protocol.

The methods of the UIAlertViewDelegate protocol are defined as optional. The method you'll use most often is alertView(_:clickedButtonAtIndex:). This method is invoked when the user taps one of the alert view's buttons. This is what the implementation of the alertView(_:clickedButtonAtIndex:) method could look like.
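A minimal sketch of that delegate method, simply logging the index of the tapped button:

```swift
func alertView(alertView: UIAlertView, clickedButtonAtIndex buttonIndex: Int) {
    // buttonIndex is 0 for the cancel button, 1 for the first other button, and so on.
    println("Button at index \(buttonIndex) was tapped.")
}
```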

Build and run the application in the iOS Simulator to see if everything is working as expected.

3. UIAlertController

The interface of UIAlertController is very different from that of UIAlertView, but Apple's motivation to transition to the UIAlertController class makes sense once you've used it a few times. It's an elegant interface that will feel familiar.

The first benefit of using the UIAlertController class is the absence of a delegate protocol to handle user interaction. This means that we only need to update the implementation of the showAlert action. Take a look at the updated implementation below.
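A sketch of that implementation (titles, messages, and the handler body are placeholders):

```swift
@IBAction func showAlert(sender: AnyObject) {
    // Create the alert controller with the modal alert style.
    let alertController = UIAlertController(title: "Example", message: "This is a modal alert.", preferredStyle: .Alert)

    // Define one action per button; the handler closure replaces the delegate.
    let cancelAction = UIAlertAction(title: "Cancel", style: .Cancel, handler: nil)
    let okAction = UIAlertAction(title: "OK", style: .Default) { (action) in
        println("The OK button was tapped.")
    }

    // The order of addAction calls determines the button order.
    alertController.addAction(cancelAction)
    alertController.addAction(okAction)

    // UIAlertController is a view controller, so we simply present it.
    presentViewController(alertController, animated: true, completion: nil)
}
```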

The initialization is pretty easy. We pass in a title, a message, and, most importantly, set the preferred style to UIAlertControllerStyle.Alert or .Alert for short. The preferred style tells the operating system if the alert controller needs to be presented as an action sheet, .ActionSheet, or a modal alert, .Alert.

Instead of providing titles for the buttons and handling user interaction through the UIAlertViewDelegate protocol, we add actions to the alert controller. Every action is an instance of the UIAlertAction class. Creating a UIAlertAction is simple. The initializer accepts a title, a style, and a handler. The style argument is of type UIAlertActionStyle. The handler is a closure, accepting the UIAlertAction object as its only argument.

The use of handlers instead of a delegate protocol makes the implementation of a modal alert more elegant and easier to understand. There's no longer a need for tagging alert views if you're working with multiple modal alerts.

Before we present the alert controller to the user, we add the two actions by calling addAction(_:) on the alertController object. Note that the order of the buttons of the modal alert is determined by the order in which the actions are added to the alert controller.

Because the UIAlertController class is a UIViewController subclass, presenting the alert controller to the user is as simple as calling presentViewController(_:animated:completion:), passing in the alert controller as the first argument.

4. UIActionSheet

Unsurprisingly, Apple also deprecated the UIActionSheet class and the UIActionSheetDelegate protocol. As of iOS 8, it is recommended to use the UIAlertController class to present an action sheet.

Presenting an action sheet is identical to presenting a modal alert. The only difference is the alert controller's preferredStyle property, which needs to be set to UIAlertControllerStyle.ActionSheet, or .ActionSheet for short, for action sheets.
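For example, a hypothetical showActionSheet action would only differ in the preferred style:

```swift
@IBAction func showActionSheet(sender: AnyObject) {
    // Identical to the modal alert, apart from the .ActionSheet style.
    let alertController = UIAlertController(title: "Example", message: "This is an action sheet.", preferredStyle: .ActionSheet)
    alertController.addAction(UIAlertAction(title: "Cancel", style: .Cancel, handler: nil))
    alertController.addAction(UIAlertAction(title: "Delete", style: .Destructive, handler: nil))
    presentViewController(alertController, animated: true, completion: nil)
}
```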

Conclusion

Even though UIAlertView and UIActionSheet are deprecated in iOS 8, you can continue using them for the foreseeable future. The interface of the UIAlertController class, however, is a definite improvement. It adds simplicity and unifies the API for presenting modal alerts and action sheets. And because UIAlertController is a UIViewController subclass, the API will already feel familiar.

2015-05-29T16:45:39.000Z2015-05-29T16:45:39.000ZBart Jacobs


Automating User Interface Testing on Android


Introduction

Android's Testing Support library includes the UI Automator framework, which can be used to perform automated black-box testing on Android apps. Introduced in API Level 18, the framework allows developers to simulate user actions on the widgets that constitute an app's user interface.

In this tutorial, I am going to show you how to use the framework to create and run a basic user interface test for the default Calculator app.

Prerequisites

To follow along, you need:

  • the latest build of Android Studio
  • a device or emulator that runs Android 4.3 or higher
  • a basic understanding of JUnit

1. Installing Dependencies

To use the UI Automator framework in your project, edit the build.gradle file in your project's app directory, adding the following dependencies:
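The dependency block itself isn't reproduced here; at the time of writing it looked roughly like the sketch below. The version numbers are assumptions, so check the Android Support repository for the current ones.

```groovy
dependencies {
    androidTestCompile 'com.android.support.test:runner:0.2'
    androidTestCompile 'com.android.support.test.uiautomator:uiautomator-v18:2.1.0'
}
```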

The Sync Now button should be on the screen now. When you click it, you should see an error that looks like this:

Error while syncing project

Click the Install Repository and sync project link to install the Android Support Repository.

If you are using the appcompat-v7 library and its version is 22.1.1, you need to add the following dependency to ensure that both the app and the test app are using the same version of com.android.support:support-annotations:
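That extra dependency would look like this, with the version pinned to match appcompat-v7 22.1.1:

```groovy
dependencies {
    androidTestCompile 'com.android.support:support-annotations:22.1.1'
}
```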

Next, due to a bug in Android Studio, you need to exclude a file named LICENSE.txt using packagingOptions. Failing to do so will lead to the following error when you try to run a test:

Add the following snippet at the bottom of your build.gradle file:
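A sketch of that snippet, excluding the conflicting license file from the packaged test APK:

```groovy
android {
    packagingOptions {
        exclude 'LICENSE.txt'
    }
}
```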

2. Create a Test Class

Create a new test class, CalculatorTester, by creating a file named CalculatorTester.java inside the androidTest directory. To create a UI Automator test case, your class must extend InstrumentationTestCase.

CalculatorTester.java should be inside androidTest

Press Alt+Insert and then click SetUp Method to override the setUp method.

Generate SetUp Method

Press Alt+Insert again and click Test Method to generate a new test method. Name this method testAdd. The CalculatorTester class should now look like this:
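The generated class should be close to this sketch:

```java
import android.test.InstrumentationTestCase;

public class CalculatorTester extends InstrumentationTestCase {

    @Override
    public void setUp() throws Exception {
        super.setUp();
    }

    public void testAdd() throws Exception {
        // Filled in later in the tutorial.
    }
}
```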

3. Inspect the Launcher's User Interface

Connect your Android device to your computer and press the home button on your device to navigate to the home screen.

Go back to your computer and use a file explorer or terminal to browse to the directory where you installed the Android SDK. Next, enter the tools directory inside it and launch uiautomatorviewer. This will launch UI Automator Viewer. You should be presented with a screen that looks like this:

UI Automator Viewer's interface

Click the button that looks like a phone to capture a screenshot of your Android device. Note that the screenshot you just captured is interactive. Click the Apps icon at the bottom. In the Node Detail section on the right, you're now able to see various details of your selection as shown below.

Apps icon details

To interact with items on the screen, the UI Automator testing framework needs to be able to uniquely identify them. In this tutorial, you will be using either the text, the content-desc, or the class of the item to uniquely identify it.

As you can see, the Apps icon doesn't have any text, but it does have a content-desc. Make a note of its value, because you will be using it in the next step.

Pick your Android device up and touch the Apps icon to navigate to the screen that shows the apps installed on the device. Head back to UI Automator Viewer and capture another screenshot. Since you will be writing a test for the Calculator app, click its icon to look at its details.

Calculator icon details

This time the content-desc is empty, but the text contains the value Calculator. Make a note of this as well.

If your Android device is running a different launcher or a different version of Android, the screens and the node details will be different. This also means that you will have to make some changes in your code to match the operating system.

4. Prepare the Test Environment

Return to Android Studio to add code to the setUp method. As its name suggests, the setUp method should be used to prepare your test environment. In other words, this is where you specify what needs to be done before running the actual test.

You will now be writing code to simulate what you did on your Android device in the previous step:

  1. Press the home button to go to the home screen.
  2. Press the Apps icon to view all apps.
  3. Launch the Calculator app by tapping its icon.

In your class, declare a field of type UiDevice and name it device. This field represents your Android device and you will be using it to simulate user interaction.

In the setUp method, initialize device by invoking the UiDevice.getInstance method, passing in an Instrumentation instance as shown below.

To simulate pressing the home button of the device, invoke the pressHome method.

Next, you need to simulate a click event on the Apps icon. You can't do this immediately though, because the Android device will need a moment to navigate to the home screen. Trying to click the Apps icon before it is visible on the screen will cause a runtime exception.

To wait for something to happen, you need to call the wait method on the UiDevice instance. To wait for the Apps icon to show up on the screen, use the Until.hasObject method.

To identify the Apps icon, use the By.desc method and pass the value Apps to it. You also need to specify the maximum duration of the wait in milliseconds. Set it to 3000. This results in the following code block:

To get a reference to the Apps icon, use the findObject method. Once you have a reference to the Apps icon, invoke the click method to simulate a click.

Like before, we need to wait a moment for the Calculator icon to show up on the screen. In the previous step, you saw that the Calculator icon can be uniquely identified by its text field. We invoke the By.text method to find the icon, passing in Calculator.

Use the findObject and click methods to obtain a reference to the Calculator icon and simulate a click.
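Put together, the setUp method described in this section might look like the sketch below. The Apps and Calculator identifiers come from the earlier inspection and will differ on other launchers; UiDevice, Until, and By live in the android.support.test.uiautomator package.

```java
private UiDevice device;
private static final int TIMEOUT = 3000;

@Override
public void setUp() throws Exception {
    super.setUp();
    device = UiDevice.getInstance(getInstrumentation());

    // Go back to the home screen.
    device.pressHome();

    // Wait for the Apps icon to appear, then open the app drawer.
    device.wait(Until.hasObject(By.desc("Apps")), TIMEOUT);
    device.findObject(By.desc("Apps")).click();

    // Wait for the Calculator icon, then launch the app.
    device.wait(Until.hasObject(By.text("Calculator")), TIMEOUT);
    device.findObject(By.text("Calculator")).click();
}
```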

5. Inspect the Calculator's User Interface

Launch the Calculator app on your Android device and use UI Automator Viewer to inspect it. After capturing a screenshot, click the buttons to see how you can uniquely identify them.

For this test case, you will be making the calculator calculate the value of 9+9= and check if it shows 18 as the result. This means that you need to know how to identify the buttons with the labels 9, +, and =.

Inspecting the UI of the calculator app

On my device, here's what I gathered from the inspection:

  • The buttons containing the digits have matching text values.
  • The buttons containing the + and = symbols have the content-desc values set to plus and equals respectively.
  • The result is shown in an EditText widget.

Note that these values might be different on your device if you are using a different version of the Calculator app.

6. Create the Test

In the previous steps, you already learned that you can use the findObject method along with either By.text or By.desc to get a reference to any object on the screen. You also know that you have to use the click method to simulate a click on the object. The following code uses these methods to perform the calculation 9+9=. Add it to the testAdd method of the CalculatorTester class.

At this point, you have to wait for the result. However, you can't use Until.hasObject here because the EditText containing the result is already on the screen. Instead, you have to use the waitForIdle method to wait for the calculation to complete. Again, the maximum duration of the wait can be 3000 ms.

Get a reference to the EditText object using the findObject and By.clazz methods. Once you have the reference, call the getText method to determine the result of the calculation.

Finally, use assertTrue to verify that the result is equal to 18.
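Assembled, testAdd could look like this sketch. The button identifiers are the ones gathered during inspection, so they may vary with your Calculator version.

```java
public void testAdd() throws Exception {
    // Tap 9 + 9 = on the keypad.
    device.findObject(By.text("9")).click();
    device.findObject(By.desc("plus")).click();
    device.findObject(By.text("9")).click();
    device.findObject(By.desc("equals")).click();

    // Wait for the calculation to complete, then read the result field.
    device.waitForIdle(TIMEOUT);
    UiObject2 result = device.findObject(By.clazz("android.widget.EditText"));
    assertTrue(result.getText().equals("18"));
}
```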

Your test is now complete.

7. Run the Test

To run the test, in the toolbar of Android Studio, select the class CalculatorTester from the drop-down and click the play button on its right.

Select CalculatorTester and press play

Once the build finishes, the test should run and complete successfully. While the test runs, you should be able to see the UI automation running on your Android device.

Test Results

Conclusion

In this tutorial, you have learned how to use the UI Automator testing framework and UI Automator Viewer to create user interface tests. You also saw how easy it is to run the test using Android Studio. Even though we tested a rather simple app, you can apply the concepts you learned here to test almost any Android app.

You can learn more about the testing support library on the Android Developers website.

2015-06-01T17:30:40.000Z2015-06-01T17:30:40.000ZAshraff Hathibelagal

Automating User Interface Testing on Android

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-23969

Introduction

Android's Testing Support library includes the UI Automator framework, which can be used to perform automated black-box testing on Android apps. Introduced in API Level 18, the framework allows developers to simulate user actions on the widgets that constitute an app's user interface.

In this tutorial, I am going to show you how to use the framework to create and run a basic user interface test for the default Calculator app.

Prerequisites

To follow along, you need:

  • the latest build of Android Studio
  • a device or emulator that runs Android 4.3 or higher
  • a basic understanding of JUnit

1. Installing Dependencies

To use the UI Automator framework in your project, edit the build.gradle file in your project's app directory, adding the following dependencies:

The Sync Now button should be on the screen now. When you click it, you should see an error that looks like this:

Error while syncing project

Click the Install Repository and sync project link to install the Android Support Repository.

If you are using the appcompat-v7 library and its version is 22.1.1, you need to add the following dependency to ensure that both the app and the test app are using the same version of com.android.support:support-annotations:

Next, due to a bug in Android Studio, you need to exclude a file named LICENSE.txt using packagingOptions. Failing to do so will lead to the following error when you try to run a test:

Add the following snippet at the bottom of your build.gradle file:

2. Create a Test Class

Create a new test class, CalculatorTester, by creating a file named CalculatorTester.java inside the androidTest directory. To create a UI Automator test case, your class must extend InstrumentationTestCase.

CalculatorTesterjava should be inside androidTest

Press Alt+Insert and then click SetUp Method to override the setUp method.

Generate SetUp Method

Press Alt+Insert again and click Test Method to generate a new test method. Name this method testAdd. The CalculatorTester class should now look like this:

3. Inspect the Launcher's User Interface

Connect your Android device to your computer and press the home button on your device to navigate to the home screen.

Go back to your computer and use your a file explorer or terminal to browse to the directory where you installed the Android SDK. Next, enter the tools directory inside it and launch uiautomatorviewer. This will launch UI Automater Viewer. You should be presented with a screen that looks like this:

UI Automator Viewers interface

Click the button that looks like a phone to capture a screenshot of your Android device. Note that the screenshot you just captured is interactive. Click the Apps icon at the bottom. In the Node Detail section on the right, you're now able to see various details of your selection as shown below.

Apps icon details

To interact with items on the screen, the UI Automator testing framework needs to be able to uniquely identify them. In this tutorial, you will be using either the text, the content-desc, or the class of the item to uniquely identify it.

As you can see, the Apps icon doesn't have any text, but it does have a content-desc. Make a note of its value, because you will be using it in the next step.

Pick your Android device up and touch the Apps icon to navigate to the screen that shows the apps installed on the device. Head back to UI Automater Viewer and capture another screenshot. Since you will be writing a test for the Calculator app, click its icon to look at its details.

Calculator icon details

This time the content-desc is empty, but the text contains the value Calculator. Make a note of this as well.

If your Android device is running a different launcher or a different version of Android, the screens and the node details will be different. This also means that you will have to make some changes in your code to match the operating system.

4. Prepare the Test Environment

Return to Android Studio to add code to the setUp method. As its name suggests, the setUp method should be used to prepare your test environment. In other words, this is where you specify what needs to be done before running the actual test.

You will now be writing code to simulate what you did on your Android device in the previous step:

  1. Press the home button to go to the home screen.
  2. Press the Apps icon to view all apps.
  3. Launch the Calculator app by tapping its icon.

In your class, declare a field of type UiDevice and name it device. This field represents your Android device and you will be using it to simulate user interaction.

In the setUp method, initialize device by invoking the UiDevice.getInstance method, passing in a Instrumentation instance as shown below.

To simulate pressing the home button of the device, invoke the pressHome method.

Next, you need to simulate a click event on the Apps icon. You can't do this immediately though, because the Android device will need a moment to navigate to the home screen. Trying to click the Apps icon before it is visible on the screen will cause a runtime exception.

To wait for something to happen, you need to call the wait method on the UiDevice instance. To wait for the Apps icon to show up on the screen, use the Until.hasObject method.

To identify the Apps icon, use the By.desc method and pass the value Apps to it. You also need to specify the maximum duration of the wait in milliseconds. Set it to 3000. This results in the following code block:

To get a reference to the Apps icon, use the findObject method. Once you have a reference to the Apps icon, invoke the click method to simulate a click.

Like before, we need to wait a moment for the Calculator icon to show up on the screen. In the previous step, you saw that the Calculator icon can be uniquely identified by its text field. We invoke the By.text method to find the icon, passing in Calculator.

Use the findObject and click methods to obtain a reference to the Calculator icon and simulator a click.

5. Inspect the Calculator's User Interface

Launch the Calculator app on your Android device and use UI Automater Viewer to inspect it. After capturing a screenshot, click the buttons to see how you can uniquely identify them.

For this test case, you will be making the calculator calculate the value of 9+9= and check if it shows 18 as the result. This means that you need to know how to identify the buttons with the labels 9, +, and =.

Inspecting the UI of the calculator app

On my device, here's what I gathered from the inspection:

  • The buttons containing the digits have matching text values.
  • The buttons containing the + and = symbols have the content-desc values set to plus and equals respectively.
  • The result is shown in an EditText widget.

Note that these values might be different on your device if you are using a different version of the Calculator app.

6. Create the Test

In the previous steps, you already learned that you can use the findObject method along with either By.text or By.desc to get a reference to any object on the screen. You also know that you have to use the click method to simulate a click on the object. The following code uses these methods to perform the calculation 9+9=. Add it to the testAdd method of the CalculatorTester class.

At this point, you have to wait for the result. However, you can't use Until.hasObject here because the EditText containing the result is already on the screen. Instead, you have to use the waitForIdle method to wait for the calculation to complete. Again, the maximum duration of the wait can be 3000 ms.

Get a reference to the EditText object using the findObject and By.clazz methods. Once you have the reference, call the getText method to determine the result of the calculation.

Finally, use assertTrue to verify that the result is equal to 18.

Your test is now complete.

6. Run the Test

To run the test, in the toolbar of Android Studio, select the class CalculatorTester from the drop-down and click the play button on its right.

Select CalculatorTester and press play

Once the build finishes, the test should run and complete successfully. While the test runs, you should be able to see the UI automation running on your Android device.

Test Results

Conclusion

In this tutorial, you have learned how to use the UI Automator testing framework and the UI Automater Viewer to create user interface tests. You also saw how easy it is to run the test using Android Studio. Even though we tested a rather simple app, you can apply the concepts you learned here to test almost any Android app.

You can learn more about the testing support library on the Android Developers website.

2015-06-01T17:30:40.000Z Ashraff Hathibelagal

WatchKit Navigation, Transitions, and Contexts

tag:code.tutsplus.com,2005:PostPresenter/cms-23938

Introduction

Apple's WatchKit framework for developing Apple Watch applications provides several ways for you, as a developer, to present different types of interfaces to users of your app. This includes page-based, hierarchical, and modal interfaces, which can all use contexts to create dynamic content.

In this tutorial, I am going to show you how to set up and manipulate each interface type, and what use cases they are each designed for.

Requirements

This tutorial requires that you are running Xcode 6.2+ and are comfortable with creating a basic Apple Watch app. If not, please read some of the other WatchKit tutorials on Tuts+ and then come back to this one. You will also need to download the starter project from GitHub.

1. Page-Based Interfaces

The first kind of interface you are going to implement in your Apple Watch app will be a page-based one. These kinds of interfaces function very similarly to the standard home screen on an iOS device to show multiple pages of information in a set order. Page-based interfaces are best suited for when you need to display multiple screens of information that are related to each other.

Open the starter project in Xcode and navigate to Interface.storyboard. The storyboard already contains six interface controllers, as you can see below.

Initial storyboard

To create a page-based interface, you need to create a next page relationship segue between the interface controllers you want to link. Hold the Control key on your keyboard and drag from the first interface controller to the second. If done correctly, a Relationship Segue pop-up should appear. From this pop-up menu, choose the next page option as shown below.

Setting up next page segue

Follow these same steps to link the second interface controller to the third one. The storyboard should now show the segues between the top three interface controllers. Note that the order in which you create these segues determines the order the interfaces will appear in your WatchKit app.

Page-based segues

Build and run your app, and open an Apple Watch as an external display in the iOS Simulator. You will see that the app displays the First Page interface controller and has three dots at the bottom, representing the three available pages. You can move between the pages by swiping left or right, just as you would on an iOS device.

Initial app screen First page
Initial app screen Third page

When using a page-based interface, you can specify which interface controller you want to appear at launch. This is done by using the becomeCurrentPage method. Open SecondPageInterfaceController.swift and add the following line to the awakeWithContext(_:) method:
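As a sketch, the call could be placed in awakeWithContext(_:) like this:

```swift
override func awakeWithContext(context: AnyObject?) {
    super.awakeWithContext(context)

    // Make this interface controller the page shown at launch
    becomeCurrentPage()
}
```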

Build and run your app again, and you will see that the second page is now presented on launch.

Second page appears first

At runtime, you can also specify an explicit order in which to show the pages of your interface. This is done by using the reloadRootControllersWithNames(_:contexts:) class method.

The first parameter of this method is an array of strings that contains the storyboard identifiers of the interface controllers you want to load. The order of identifiers in this array determines the order that the pages appear in.

The second parameter is an optional AnyObject type array that contains the contexts for each of the pages. You will learn about contexts later in this tutorial. For now, you will just leave this parameter as nil. Replace the line you just added to your awakeWithContext(_:) method with the following:
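A sketch of that call, assuming the interface controllers use the storyboard identifiers "Third Page" and "First Page":

```swift
// The identifiers below are assumptions; substitute the storyboard
// identifiers of your own interface controllers.
WKInterfaceController.reloadRootControllersWithNames(
    ["Third Page", "First Page"], contexts: nil)
```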

Build and run your app, and you will see that, after the loading has completed, your app will show the third page followed by the first page.

Third page is now first
First page is now last

2. Hierarchical Interfaces

In addition to page-based interfaces, you can also implement hierarchical interfaces in an Apple Watch app. We speak of a hierarchical interface when you transition between interface controllers using a push transition.

The behavior of a hierarchical interface is similar to that of the UINavigationController class in an iOS app. This type of Apple Watch interface is best suited for presenting multiple interfaces one after another in a linear fashion.

Revisit Interface.storyboard and drag the Main Entry Point arrow to the Transition interface controller as shown below. This will make the specified interface controller appear first when the app is launched.

Changing entry point

Next, open TransitionInterfaceController.swift and add the following line in the pushButtonPressed method:
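The line might look like the following sketch, where "Hierarchal" is an assumed storyboard identifier matching the starter project:

```swift
// "Hierarchal" is an assumed storyboard identifier of the
// interface controller being pushed
pushControllerWithName("Hierarchal", context: nil)
```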

Similar to the reloadRootControllersWithNames(_:contexts:) method that you used earlier, the first parameter of pushControllerWithName(_:context:) is the storyboard identifier of the interface controller you want to push. The second parameter is the context for this new interface controller.

Build and run your app. You should see the following interface when your WatchKit app has finished launching.

Transition interface

Tapping the Hierarchal button should push the next interface onto the screen as shown below.

Pushing the next interface onto the stack

You will notice that there is now a small arrow in the top left corner of the screen. Tapping the arrow will take you back to the previous interface. It's also possible to pop the current interface controller in code. In the HierarchalInterfaceController class, update the popButtonPressed method as follows:
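A minimal sketch of the updated method:

```swift
@IBAction func popButtonPressed() {
    // Pop the current interface controller off the hierarchy
    popController()
}
```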

Build and run your app again. Tapping the Pop button should now have the same effect as pressing the back arrow in the top left.

Alternatively, if you want to return to the very first interface in the hierarchy, you invoke the popToRootController method rather than the popController method. For your current app, these methods would both produce the same result as there are only two interfaces in the hierarchy at the moment.

3. Modal Interfaces

Modal interfaces function similarly to hierarchical interfaces. The major difference between the two is that modal interfaces are designed to display interfaces on top of one another rather than transitioning between them in a linear fashion.

Head back to TransitionInterfaceController.swift and add the following line of code to the modalButtonPressed method:
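The line might look like this sketch, where "Modal" is an assumed storyboard identifier:

```swift
// "Modal" is an assumed storyboard identifier of the modal
// interface controller
presentControllerWithName("Modal", context: nil)
```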

To dismiss the modal interface, update the dismissButtonPressed method as follows in the ModalInterfaceController:
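A minimal sketch of the updated method:

```swift
@IBAction func dismissButtonPressed() {
    // Dismiss this modally presented interface controller
    dismissController()
}
```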

Build and run your app. Tap the Modal button to present a modal interface.

Modal interface

An advantage of modal interfaces is that you can modally present a page-based interface. This is done by using the presentControllerWithNames(_:contexts:) method. The first parameter is an array of storyboard identifiers and the second parameter is an array of context objects. In TransitionInterfaceController.swift, update the implementation of the modalButtonPressed method as follows:
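A sketch of the call using WatchKit's presentControllerWithNames(_:contexts:) selector; both identifiers are assumptions:

```swift
// Both identifiers are assumptions; substitute the storyboard
// identifiers of the pages you want to present
presentControllerWithNames(["Modal Page 1", "Modal Page 2"], contexts: nil)
```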

Run your app and tap the Modal button. A page-based interface should be presented modally with the following two interfaces:

Modal page-based interface 1
Modal page-based interface 2

4. Interface Contexts

As you have seen from the various methods used in this tutorial so far, when transitioning to a new interface you can pass in a context to configure the interface that's about to be presented. The context you pass to your new interface is optional and can be of any type (AnyObject?).

This means that you can pass any type of data between interfaces, from simple numbers to complex data structures. The context is handed to the new interface in the awakeWithContext(_:) method. The advantage of passing a context to an interface controller is to configure its contents dynamically, that is, at runtime.

Open TransitionInterfaceController.swift and update the implementation of the modalButtonPressed method as follows:
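The updated call might look like this sketch, again assuming "Modal" as the storyboard identifier:

```swift
// Pass a String as the context to configure the modal interface
presentControllerWithName("Modal", context: "Custom Text")
```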

In ModalInterfaceController.swift, update the implementation of the awakeWithContext(_:) method as follows:
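A sketch of the updated method; the button outlet name is an assumption:

```swift
override func awakeWithContext(context: AnyObject?) {
    super.awakeWithContext(context)

    // Use optional binding to check whether the context is a String;
    // the button outlet name is an assumption
    if let title = context as? String {
        button.setTitle(title)
    }
}
```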

We use optional binding to see if the context provided can be cast to a String. If it can, we set the button's title to that value.

Build and run your app, and open the modal interface. The title of the button should have changed to Custom Text.

Interface using contexts

Learn More in Our WatchKit Course

If you're interested in taking your WatchKit education to the next level, you can take a look at our full course on WatchKit development.

Conclusion

In this tutorial, you learned how to set up and use the three main interface types available to WatchKit applications: page-based, hierarchical, and modal. You also learned how to use interface contexts to configure interface controllers at runtime, and when each of these interface types is best suited for your WatchKit applications. You can read more about interface navigation in Apple's documentation.

2015-06-03T17:55:20.250Z Davis Allie


Google I/O 2015 Aftermath


Every year developers sit on the edge of their seats waiting for Google I/O to come along and wow us with the introduction of new features, services, and development tools. Last year Google focused on revolutions by introducing new form factors, such as Android Wear, and Material Design.

This year, Google took the necessary steps of focusing on enhancements to the Android operating system and providing developers with the tools they need to build better applications. On top of this, they introduced some interesting new tech to boot.

Let's take a few minutes to go over what was discussed at the conference, what's available right now, and what will be coming out over the next few months.

1. Android

Arguably the largest focus at Google I/O this year was the Android platform. First and foremost was the announcement of the Android M developer preview, continuing the precedent Google set with Lollipop of releasing preview versions of the operating system to developers.

Google also announced that they are working with manufacturers to move towards a standard bidirectional USB-C device connector, granting new Android devices the ability to charge three to five times faster.

Continuing their recent focus on efficiency and power consumption, Google announced new APIs and features of the operating system meant to conserve device battery. In addition, Google introduced a plethora of improvements to the Play Store to help engage users and tools for developers to build apps through use of support libraries and new APIs.

Android M Developer Preview

The biggest piece of news coming from Google I/O this year was the announcement of a new version of the Android operating system, Android M, which will be released during Q3 of this year.

M is an enhancement to the current Lollipop operating system and focuses on polish and software quality. This newest iteration of Android includes thousands of bug fixes from Lollipop, new hardware APIs, and improved power management.

Google will be releasing multiple updates to the M preview with bug fixes and additional features, roughly once a month until the official release. The Android M developer preview is currently available for the Nexus 5, 6, 9, and Player.

Devices that support the M preview

Doze

First discussed during the Google I/O Keynote, Doze is a new feature of Android M that uses significant motion detection to determine if a device is being used. If it isn't, then the operating system exponentially backs off network activity to conserve battery while the device is idle.

While a device is in doze mode, it can still wake itself up to respond to alarms and high priority notifications. According to Google, idling two Nexus 9 tablets, one running Android Lollipop and the other running Android M, resulted in the M device battery lasting twice as long.

Android M APIs

With every Android update comes a new set of APIs developers can use to improve their apps. While this list of new APIs is much shorter this year, they are nonetheless impressive.

Many of the new APIs are focused on hardware, such as enhanced authentication using fingerprint scanners, improved stylus support for buttons and gestures, and a 4K display mode. Google has even introduced an API that deals with voice interactions, allowing applications to communicate with users through conversation.

Additional APIs available in Android M are focused on user engagement. One such API is direct sharing, which lets users share information about an app with specific targets, such as email or hangouts contacts.

Another API, App Links, lets devices automatically associate web URLs with a verified application, rather than having to go through an app selection dialog. Even more powerful is the Assist API. Using Assist, you will be able to implement contextually aware Google Now functionality directly into your app. Assist bases results on the content being displayed to the user. The goal is to provide answers and possible actions to the user as they're needed.

In addition to this, Google has added a feature known as Now on Tap. Now on Tap allows users to hold down the device's home button to generate Now cards based on in-app information.

Contextually Aware Google Now in App

Runtime Permissions

For years, users have asked for a solution to the all-or-nothing approach of app permissions in Android. Starting with M, Google has introduced runtime permissions for applications.

Instead of requiring users to accept all permissions at install time, a dialog will prompt users to allow or deny a permission when it is required. If the user denies the permission, the requesting process will terminate and the application will need to fall back on a contingency.

To help users, Android has regrouped permissions into a set of easy to understand categories. It is important to note that these permission categories can be denied or allowed at any time through the device's settings screen.

Example of a permissions dialog

Play Services 7.5

In addition to the M preview, Google has rolled out version 7.5 of Play Services, a library that includes a lot of new and interesting features. Last year, the JobScheduler API was released, allowing developers to batch operations when certain conditions are met by the system in order to save battery. The downside of the API is that it only runs on Lollipop. With this new version of Play Services, Google has introduced the GcmNetworkManager, essentially a backwards compatible equivalent of JobScheduler that delegates to JobScheduler when it is available.

Other useful additions include:

  • Google Cloud Messages that can be subscribed to and filtered by topic.
  • App Invites allow users to send an install link directly to their friends.
  • Google Cast remote displays let users view different content on their device and another screen, such as a television.
  • The Google Maps API can now run on Android Wear devices.
  • Google improved Google Fit data and added dozens of newly supported workout exercises.

Design Support Library

Alongside the Play Services update, Google introduced the Design Support Library. Using this support library, developers are now able to implement various user interface components back to API 7, which were previously only available in Android Lollipop or through third-party libraries.

Some of the components available include floating action buttons, navigation drawer headers, and a new container called the CoordinatorLayout, which automatically moves views as other views change size or visibility.

Example of a navigation drawer with a header

Play Store Enhancements

During this year's Keynote, Google also announced a number of new features for the Play Store. One set of improvements revolves around providing statistics to help increase app download rates.

In the updated Google Play Developers Console, developers will now be able to view how many users have looked at their application in the store compared to how many have committed to installing. Developers will also be able to use Experiments, a service that allows them to try variants of their app store listing to see what changes may drive more downloads.

In addition to the application's store listing, developers will be able to create custom Google Play home pages for displaying all of their applications as well as some information about the developer or the company.

Another set of improvements is focused on what content is displayed when users search through the Play Store. The store can now be more aware of specific apps that match a user's search criteria and, when a search is vague, the user will be provided with a set of categories with apps that may meet their needs.

The last major change to the Play Store is a shift in how apps for children and families are found. Apps can now have a rating and a label that indicate whether the content is suitable for families. Users can also search for apps while filtering by specific age groups. Alongside traditional methods of finding apps, the Play Store has also introduced a character search feature, allowing parents to search for apps based on their children's favorite book, movie or cartoon characters.

Updated Developer Console Showing Views vs Installs

Android Pay

Confirming the rumors that had been floating around before Google I/O, Google launched a new service called Android Pay. Using NFC, Android Pay allows users to perform transactions in over 700,000 retail locations that accept contactless payments.

Android Pay keeps security in the forefront by using a virtual account number for transactions, rather than sharing the user's actual card number. Likewise, users on Android M will have an additional layer of security available through the use of hardware fingerprint scanners.

Pay can also be integrated into applications, allowing users to quickly and easily purchase goods from their device. Android Pay will be supported on any device with NFC, back to Android KitKat.

Android Pay Console

Android Development Tools

Two years ago, Google introduced the first beta of Android Studio, and since then they have been continuously improving the IDE to make the lives of developers that much easier.

This year was no exception. Google released Android Studio 1.3 on the canary channel. The newest version includes great features, such as faster gradle build speeds, a new memory profiler, new support annotations, and the ability to bind data models with views through XML layout files.

Android Studio has also added one of the most requested development features: better native development support. Full editing and debugging support for C++ applications, including error correction and code completion, is now available for developers using the NDK.

C Debugging in Android Studio

While the tools for building applications have been improved, Google has also added a new service, following their acquisition of Appurify, called Cloud Test Lab. Using Cloud Test Lab, developers can upload their application and Google will run tests on the top 20 most popular Android devices. After completing the tests, Cloud Test Lab will deliver a free report, containing crash logs and a video of the running application.

2. Google Photos

One of the more exciting announcements at Google I/O was the introduction of Google Photos. Breaking away from Google+, Photos is available for iOS, Android, and the web. The service automatically categorizes images and creates collections based on timelines and albums, helping to organize content.

Searching has been improved by giving users the ability to quickly browse by day, month, or year. Not only does Google Photos store images, it also allows users to perform basic photo editing, create movies, collages, and animations from their pictures. Best of all, Google will store your photos in high resolution for free with unlimited storage.

3. Cardboard

Since Cardboard was announced at Google I/O 2014, over one million viewers have been assembled. This year, Google has made some simple updates to their VR headset, such as adding a button that is usable with more devices and changing the dimensions to support any phone up to six inches.

Google has also updated the Unity SDK to support iOS devices, and the company released a version of the viewer application to Apple's App Store. While the Cardboard headset is interesting in itself, what Google plans to do with it is what's really magical.

Expeditions

As a part of the Google in Education initiative, Google has introduced a new program called Expeditions. Through Expeditions, preassembled kits with Cardboard headsets, phones, and an instructor tablet will be sent to classrooms to allow children to experience simulated field trips.

Expeditions will consist of high definition, 360 degree videos of locations across the globe. While on these field trips, teachers will be able to discuss the scene the children are seeing and teach them in a more fun and interactive way.

Children viewing Expeditions using Cardboard

Jump

To create the high quality, 360 degree videos that Expeditions and Cardboard will require, Google has created a system it dubs Jump. Jump consists of three parts. The first part is a physical array of sixteen cameras arranged to film from all angles with multiple points of intersection. While Google will release the schematics for building a Jump rig from scratch, they have also partnered with GoPro to sell one that is preassembled.

GoPro Jump Array

The second part of the Jump system is known as the Assembler. Using powerful computers in the cloud, content from a Jump rig can be uploaded and processed in order to smooth images, balance colors, and create stereoscopic VR videos. When this service is first turned on this summer, it will only be available to select creators until the official launch later this year.

The third and final part of Jump is getting this content to users. This is done by adding support for VR videos on YouTube. This means that anyone with a Cardboard headset can begin accessing it this summer.

4. Internet of Things

With the acquisition of Nest last year, Google has started work on getting involved with the Internet of Things (IoT). As they pointed out during the keynote presentation, one of the biggest challenges facing the Internet of Things right now is the lack of uniform software and communication between devices.

To help overcome this challenge, they've introduced two new technologies, Brillo and Weave. Brillo is an underlying operating system for IoT devices, derived from Android and pared down to be lightweight while still supporting features such as Bluetooth communication.

Weave is a common language, similar to JSON, for devices that need to communicate with each other, be it devices in the cloud, a phone, or IoT hardware. As of right now, information is sparse on both Brillo and Weave. Brillo, however, should be released in Q3 of 2015, and Weave will be available by Q4 of this year with some information coming throughout the year.

5. Project Jacquard

As other items in the world become more connected, it makes sense that the concept of wearables will expand to include clothing. Project Jacquard revolves around strong conductive fabrics that can withstand the strain of industrial looms. The goal of Project Jacquard is to let innovators design furniture and clothing that respond to touches and gestures, controlling electronics embedded in the fabric.

6. Optimization for Lower End Devices

To improve the experience for users across the world, Google has adopted multiple techniques to make browsing from a mobile device more efficient. Chrome is currently using an optimized search page for fourteen countries to support slower connections, such as 2G.

With these optimizations, web pages load four times faster and use 80% less data. Taking the lessons learned from these fourteen countries, Google plans to use optimized web pages across the globe for lower end devices, as determined by their new Network Quality Estimator tool.

Other techniques include displaying a stock thumbnail instead of downloading images so that users don't consume data unnecessarily, and allowing users to save web pages for offline use.

Recently, YouTube has begun testing offline video playback for up to 48 hours in four countries, so that videos can be viewed later without an active network connection. Likewise, Google Maps is adding offline maps and step-by-step directions, which will be available later this year.

Network Quality Estimator at work

7. More Development Tools

Polymer has finally been officially released as version 1.0. This milestone release includes new features, such as the ability to drop in common components like charts and toolbars, and a fast data-binding system.

For iOS developers, Google announced that it is adopting CocoaPods as the primary method of delivering its SDKs.

Conclusion

This year, like most years, Google I/O covered a lot of ground. From education with Cardboard to stepping into the Internet of Things arena, Google continues to demonstrate that it is a versatile company invested in the development community. We live in interesting times, and they're only going to get more interesting.

2015-06-05T18:05:51.000Z Paul Trebilcox-Ruiz