Channel: Envato Tuts+ Code - Mobile Development

Introduction to iOS Testing With UI Automation


Just imagine being able to write scripts that automatically interact with your iOS application and be able to verify the results. With UI Automation you can. UI Automation is a tool provided by Apple to perform a higher level of testing on your iOS application beyond anything achievable with XCTest.

1. White Box versus Black Box Testing

You might have heard the comparison of white box testing versus black box testing with regard to how one might test a piece of software. If you're not familiar with these concepts, let me explain how they work.

White Box Testing

Imagine there's a piece of software running inside a box. With white box testing, you can see inside the box and look at all the gritty pieces of how the software works, and then make educated decisions on how to test the software. You can also have deeper level hooks into the software from the tests that you write.

Unit testing is white box testing. When writing unit tests, the tester has fine-grained access to the code under test. The tester can actually write tests that leverage the software under test at the method, or unit, level.

In iOS software development we use the XCTest framework to perform this type of testing. Have a look at another tutorial I wrote on getting started with XCTest.

Black Box Testing

In black box testing, the box is opaque. The tester cannot see inside the box and has no access to, or knowledge of, the implementation when writing tests. Instead, the tester is forced to use the application as an end user would, interacting with the application, waiting for its response, and verifying the results.

There are at least two ways to execute this type of testing.

  • A tester repeatedly and manually performs a number of predefined steps and visually verifies the results.
  • Specialized tools exercise the application through APIs that behave similarly to how a human interacts with it.

In iOS application development, Apple provides a tool called UI Automation to perform black box testing.

2. What is UI Automation? 

UI Automation is a tool that Apple provides and maintains for higher-level, automated testing of iOS applications. Tests are written in JavaScript, adhering to an API defined by Apple.

Writing tests can be made easier by relying on accessibility labels for user interface elements in your application. Don’t worry though, if you don’t have these defined, there are alternatives available.

The UI Automation API lacks the typical xUnit-based format for writing tests. Unlike unit testing, the tester needs to manually log successes and failures. UI Automation tests are run from the Automation instrument, part of the Instruments tool included in Apple's developer tools. The tests can be run in the iOS Simulator or on a physical device.
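Because pass/fail logging is manual, it can help to wrap the repeated if/else logging in a small helper. The helper below is my own sketch, not part of Apple's API; UIALogger is provided by the Instruments runtime, and the stub only exists so the snippet can run outside Instruments.

```javascript
// UIALogger is provided by the Instruments runtime; the stub below is only a
// stand-in so the helper can run outside Instruments.
if (typeof UIALogger === "undefined") {
  var UIALogger = {
    logPass: function (message) { console.log("PASS: " + message); },
    logFail: function (message) { console.log("FAIL: " + message); }
  };
}

// Log a pass or a fail depending on whether actual matches expected.
function assertEquals(description, actual, expected) {
  if (actual === expected) {
    UIALogger.logPass(description);
    return true;
  }
  UIALogger.logFail(description + " (expected '" + expected + "', got '" + actual + "')");
  return false;
}
```

With a helper like this, each verification in a test script becomes a single line instead of an if/else pair.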

3. Writing UI Automation Tests

Step 1: Open the Sample Project

I’ve updated the sample project used in the previous tutorial on iOS testing with some additional user interface elements that provide some useful hooks for adding UI Automation tests. Download the project from GitHub. Open the project and run the application to make sure that everything is working as expected. You should see a user interface similar to the one shown below.

Sample Application screenshot

Before we write any tests, feel free to try out the sample application to become familiar with its functionality. As a user, you can enter text in the text field and tap the button to see a label on the screen that displays the reversed input string.

Step 2: Create a UI Automation Test

Now that you’re familiar with the sample application, it's time to add a UI Automation test. UI Automation is a tool that can be found in Instruments. To run the sample application in Instruments, select Product > Profile from Xcode's menu. Select Automation from the list of tools.

Instrument chooser screenshot

The main Instruments window will open with a single instrument ready to run, the Automation instrument, which executes UI Automation test cases. You'll also see an area in the bottom half of the window that looks like a text editor. This is the script editor, where you'll write your UI Automation tests. For this first test, follow the instructions below, adding each line to the script in the script editor.

Start by storing a reference to the text field in a variable.

var inputField = target.frontMostApp().mainWindow().textFields()["Input Field"];

Set the text field's value.

inputField.setValue("hi");

Verify that the value was set successfully and, if it was, pass the test. Fail the test if it wasn't.

if (inputField.value() != "hi") UIALogger.logFail("The Input Field was NOT able to be set with the string!");
else UIALogger.logPass("The Input Field was able to be set with the string!");

While this test is fairly trivial, it does have value. We've just written a test that verifies that a text field is present when the application is launched and that an arbitrary string can be set as its value. If you don't believe me, remove the text field from the storyboard and run the test. You'll see that it fails.

This test demonstrates three important pieces of writing UI Automation tests. First, it shows you how to access a simple user interface element, the text field. Specifically, we access a dictionary of all text fields on the base view of the application via target.frontMostApp().mainWindow().textFields() and we then find the text field we are interested in by looking for the one with key Input Field. This key is actually the accessibility label of the text field. In this case, it's defined in the storyboard. We can also set the accessibility label in code using the accessibilityLabel property on NSObject.

Accessing the application's main window, the front most application, and the target are common when working with UI Automation. I'll show you how to make this easier and less verbose later in this tutorial.
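One straightforward way to trim the verbosity is to cache the common objects in variables near the top of the script. The sketch below is my own suggestion; UIATarget is provided by the Instruments runtime, and the stub only exists so the sketch can run outside Instruments.

```javascript
// Caching the common accessor chain keeps later lookups short. UIATarget is
// provided by the Instruments runtime; the stub below is only a stand-in so
// the sketch can run outside Instruments.
if (typeof UIATarget === "undefined") {
  var UIATarget = {
    localTarget: function () {
      var windowStub = { textFields: function () { return {}; } };
      return {
        frontMostApp: function () {
          return { mainWindow: function () { return windowStub; } };
        }
      };
    }
  };
}

var target = UIATarget.localTarget();
var app = target.frontMostApp();
var mainWindow = app.mainWindow();

// A chain like target.frontMostApp().mainWindow().textFields()[...] becomes:
var inputField = mainWindow.textFields()["Input Field"];
```

Each subsequent lookup then starts from mainWindow rather than repeating the full chain.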

Second, this shows you that you can interact with user interface elements on the screen. In this case, we set the text field's value, mimicking the user interacting with the application by entering text into the text field.

And third, the example also shows a technique for verifying what happens in the application. If the value is successfully set, the test passes. If the value isn't set, the test fails.

Step 3: Saving Tests

While writing tests in the script editor is convenient, it quickly becomes cumbersome and difficult to maintain. If you quit Instruments, any unsaved changes are discarded. We need to save the tests we write. Simply copy and paste your test into a new document in your favorite text editor and save it. You can find the tests created in this tutorial in the sample project under Jumblify/JumblifyTests/AutomationTests.js.

To run the test, select the middle tab in the pane on the right, next to the script editor, and select Add > Import.

Instruments screenshot

You'll be prompted to select the script to import. Navigate to the saved script and import it. You can still change the script in the script editor. Any changes will be automatically saved in the external file you created.

Step 4: Tapping a Button

Let's update our test to cover interaction with the button. Our test already adds text to the text field, so we only need to add code to tap the button. Let's first consider how to find the button in the view so that it can be tapped. There are at least three ways to accomplish this, and each approach has its tradeoffs.

Approach 1

We can programmatically tap an (X, Y) coordinate on the screen. We do this with the following line of code:

target.tap({x: 8.00, y: 50.00});

Of course, I have no idea if those are even the coordinates of the button on the screen and I'm not going to worry about that, because this approach is not the right tool for this job. I'm only mentioning it so you know it exists. Using the tap method on target to tap a button is error-prone, because that button may not always be at that specific coordinate.

Approach 2

It's also possible to find the button by searching the array of buttons of the main window, similar to how we accessed the text field in the first test. Instead of accessing the button directly using a key, we can retrieve an array of buttons on the main window and hard code an array index to get a reference to the button.

target.frontMostApp().mainWindow().buttons()[0].tap();

This approach is a little better. We're not hard-coding a coordinate, but we are hard-coding an array index to find the button. If we happen to add another button on the page, it may accidentally break this test.

Approach 3

This brings me to the third way to find the button on the page, using accessibility labels. With an accessibility label, we can directly access the button, just like we'd find an object in a dictionary using a key.

target.frontMostApp().mainWindow().buttons()["Jumblify Button"].tap();

However, if you add the above line to the script and run it, you'll get an error.

Instruments Error Message Screenshot

This is because we haven't defined the accessibility label for the button yet. To do that, flip over to Xcode and open the project's storyboard. Find the button in the view and open the Identity Inspector on the right (View > Utilities > Identity Inspector). Ensure that Accessibility is enabled and set the Label for the button to Jumblify Button.

Interface Builder Accessibility Inspector Screenshot

To run the test again, you'll need to run the application from Xcode by selecting Product > Run and then profile the application again by selecting Product > Profile. This runs the tests, and each test should now pass.

Step 5: Verify the Jumbled String 

As I mentioned earlier, our application takes a string of text as input, and, when the user taps the button, displays the reversed string. We need to add one more test to verify that the input string is properly reversed. To verify that the UILabel is populated with the correct string, we need to figure out how to reference the UILabel and verify the string it displays. This is a common problem when writing automation tests, that is, figuring out how to reference an element in the application to make an assertion on it.

Nearly every object in the UI Automation API has a logElementTree method. This method logs the nested elements of a given element. It's very useful for understanding the hierarchy of elements in the application and helps to figure out how to target a specific element.

Let's see how this works by logging the element tree of the main window. Take a look at the following line of code.

target.frontMostApp().mainWindow().logElementTree();

Adding this line to the test script results in the following output:

Instruments logElementTree screenshot

As you can see, there's a UIAStaticText subelement of the UIAWindow and you can also see that it has a name of ih, which also happens to be the reversed string we need to verify. Now, to complete our test, we just need to add code to access this element and verify that it's present.

Why do we only need to verify that the UIAStaticText element is present? Because the element's name is the reversed input string, verifying its presence confirms that the string was correctly reversed. If no element with that name (the reversed string) exists, the string wasn't correctly reversed.

var stringResult = target.frontMostApp().mainWindow().staticTexts()["ih"];
if (!stringResult.isValid()) UIALogger.logFail("The output text was NOT set with the correctly reversed string!");
else UIALogger.logPass("The output text was set with the correctly reversed string!");
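Rather than hardcoding "ih", the test script could compute the expected reversed string from the input itself, so the test input can change without hand-reversing the expected output. The helper below is my own addition to the test script, not part of the sample project.

```javascript
// Compute the expected reversed string in the test script itself, rather
// than hand-reversing it. (This helper lives in the test script, not the app.)
function reverseString(input) {
  return input.split("").reverse().join("");
}

var inputText = "hi";
var expectedText = reverseString(inputText); // "ih"
```

The lookup key for staticTexts() then becomes expectedText instead of a literal string.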

4. Scratching the Surface

There are so many other ways that an end user can interact with an iOS device while using your app. This means that there are many other ways that you can use UI Automation to simulate these interactions. Rather than attempt to capture a comprehensive list of these interactions, I'll direct you to the UI Automation reference documentation.

For each type of object that you can interact with, you can view the list of methods available on that object. Some methods are for retrieving attributes about the object while others are for simulating touch interaction, such as flickInsideWithOptions on UIAWindow.

Recording a Session

As you test more complicated apps with UI Automation, you'll find that repeatedly using logElementTree to find the element you're looking for becomes tedious, especially in applications with a complex view hierarchy or deep navigation. In these cases, you can use another feature of Instruments to record a set of user interactions. What's even cooler is that Instruments generates the UI Automation JavaScript code needed to reproduce the recorded interactions. Here's how you can try it out for yourself.

In Instruments and with the Automation instrument selected, look for the record button at the bottom of the window.

Instruments screenshot showing record button

If you click the record button, Instruments will start a recording session as shown in the screenshot below.

Instruments Screenshot showing capture in progress

Instruments will launch your application in the iOS Simulator and you'll be able to interact with it. Instruments will generate a script based on your interactions in real time. Give it a try. Rotate the iOS Simulator, tap at random locations, perform a swipe gesture, etc. It's a really useful way to help explore the possibilities of UI Automation.

Avoiding a Monolithic Code Base

As you can probably foresee, if we keep adding tests to the test file in this fashion, it will quickly become hard to maintain. What can we do to prevent that from happening? In my tests, I do two things to solve this problem:

  • One test for one function: The tests we write need to focus on a specific piece of functionality, and each test should have a descriptive name, such as testEmptyInputField.
  • Group related tests in one file: I also group related tests in the same file. This keeps each file manageable and makes it easier to test separate pieces of functionality by executing the tests in a specific file. In addition, you can create a master script that calls the functions or tests you've grouped in other test files.

The following snippet imports a JavaScript file, making the functions in that file available to us.

#import "OtherTests.js"
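A master script built on this idea might collect the imported test functions and run them by name. The runner below is my own sketch, assuming each test signals failure by throwing; UIALogger comes from the Instruments runtime, and the stub only exists so the sketch can run elsewhere.

```javascript
// A minimal named-test runner: each test is a focused function, and the
// runner logs one pass or fail per test. UIALogger is provided by the
// Instruments runtime; the stub below is only a stand-in outside Instruments.
if (typeof UIALogger === "undefined") {
  var UIALogger = {
    logPass: function (message) { console.log("PASS: " + message); },
    logFail: function (message) { console.log("FAIL: " + message); }
  };
}

function runTests(tests) {
  var failures = 0;
  for (var name in tests) {
    try {
      tests[name](); // a test signals failure by throwing
      UIALogger.logPass(name);
    } catch (error) {
      failures += 1;
      UIALogger.logFail(name + ": " + error.message);
    }
  }
  return failures;
}

// Example usage with two trivial tests:
var failureCount = runTests({
  testPasses: function () {},
  testFails: function () { throw new Error("expected failure"); }
});
```

In practice, the functions passed to runTests would be the grouped tests imported from files such as OtherTests.js.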

Conclusion

In this tutorial, you've learned the value of higher-level testing and how UI Automation can help fill that gap. It's another tool in your toolbox to help you ship reliable and robust applications.

References

UI Automation JavaScript Reference

2014-12-15T18:15:05.000Z2014-12-15T18:15:05.000ZAndy Obusek


iOS 8: What's New in SpriteKit, Part 1


This tutorial gives an overview of the new features of the SpriteKit framework that were introduced in iOS 8. The new features are designed to make it easier to support advanced game effects and include support for custom OpenGL ES fragment shaders, lighting, shadows, advanced new physics effects and animations, and integration with SceneKit. In this tutorial, you'll learn how to implement these new features.

Before starting the tutorial, I would like to thank Mélodie Deschans (Wicked Cat) for providing us with the game art used in this series.

Prerequisites

This tutorial assumes that you are familiar with both SpriteKit and Objective-C. To interact with the shader and the scene editor without input lag, I recommend that you download and install Xcode 6.1 or later. Download the Xcode project from GitHub, if you'd like to follow along.

Series Format

This series is split up into two tutorials and covers the most important new features of the SpriteKit framework. In the first part, we take a look at shaders, lighting, and shadows. In the second part, I'll talk about physics and SceneKit integration.

While each part of this series stands on its own, I recommend following along step-by-step to properly understand the new features of the SpriteKit framework. After reading both parts, you'll be able to create both simple and more advanced games using the new features of the SpriteKit framework.

1. Introduction

SpriteKit provides a rendering pipeline that can be used to animate sprites. The rendering pipeline contains a rendering loop that alternates between determining the contents of each frame and rendering that frame. The developer determines the contents of each frame and how it changes. SpriteKit uses the GPU of the device to efficiently render each frame.

The SpriteKit framework is available on both iOS and OS X, and it supports many different kinds of content, including sprites, text, shapes, and video.

The new SpriteKit features introduced in iOS 8 are:

  • Shaders: Shaders customize how content is drawn to the screen and are useful for adding or modifying effects. They are based on OpenGL ES fragment shaders, so each effect is applied on a per-pixel basis. You program them in a C-like language, and they can be deployed to both iOS and OS X. A shader can be applied to a scene or to one of the supported classes: SKSpriteNode, SKShapeNode, SKEmitterNode, SKEffectNode, and SKScene.
  • Lighting & Shadows: Lighting illuminates a scene or sprite. Each light supports color, shadow, and fall-off configurations, and you can have up to eight different lights per sprite.
  • Physics: Physics adds realism to games. SpriteKit introduces four new physics features: per-pixel physics, constraints, inverse kinematics, and physics fields. Per-pixel physics bodies represent an object's shape accurately during interactions. The predefined constraints remove boilerplate code from scene updates. Inverse kinematics represents joints using sprites (anchor points, parent-child relationships, maximum and minimum rotation, and others). Finally, physics fields simulate gravity, drag, and electromagnetic forces. Together, these features make complex simulations much easier to implement.
  • SceneKit Integration: Through SceneKit, you can include 3D content in SpriteKit applications and control it like regular SKNode instances. The 3D content is rendered directly inside the SpriteKit rendering pipeline, and you can import existing .dae or .abc files into an SKScene.

2. Project Overview

I've created an Xcode project to get us started. It allows us to immediately start using the new SpriteKit features. However, there are a few things to be aware of.

  • The project uses Objective-C, targeting only iPhone devices running iOS 8.1. However, you can change the target device if you like.
  • Under Resources > Editor, you'll find three SpriteKit scene (.sks) files. In this series, you'll add a fourth SpriteKit scene file. Each scene file is responsible for a specific tutorial section.
  • A shader can be initialized one of two ways. The first uses the traditional method while the second uses the new SpriteKit scene method. The objective is that you learn the differences and, in future projects, choose the one that fits your needs.
  • If you instantiate an SKScene object using a SpriteKit scene file, you'll always use the unarchiveFromFile: method. Note that you must add a corresponding SKScene class for each SpriteKit scene file.
  • If you instantiate an SKScene object without using a SpriteKit scene file, you should use the initWithSize: method like you used to do in earlier versions of iOS.
  • The GameViewController and GameScene classes contain a method named unarchiveFromFile:. This method transforms the graphical objects defined in a SpriteKit scene file into an SKScene object. The method uses the instancetype keyword, since it returns an instance of the class on which it is called, in this case the SKScene class.

Download the project and take a moment to browse its folders, classes, and resources. Build and run the project on a physical device or in the iOS Simulator. If the application is running without problems, then it's time to start exploring the new iOS 8 SpriteKit features.

3. Shaders

Step 1: Create SpriteKit Scene

In the Xcode project, add a new SpriteKit Scene file. Choose File > New > File... and, from the Resource section, choose SpriteKit Scene. Name it ShaderSceneEditor and click Create. A grey interface should appear.

Step 2: SpriteKit Scene Configuration

In the SKNode Inspector on the right, you should see two properties, Size and Gravity. Set the Size property taking into account your device screen resolution and set Gravity to 0.0.

SKNode Inspector

You'll notice that the size of the yellow rectangle changes to reflect the changes you've made. The yellow rectangle is your virtual device interface. It shows you how objects are displayed on your device.

Step 3: Add a Color Sprite

Inside the Object Library on the right, select the Color Sprite and drag it into the yellow rectangle.

Object Library

Select the color sprite and open the SKNode Inspector on the right to see its properties.

SKNode Inspector of color sprite

You can interact with the object in real time. Any changes you make are displayed in the editor. You can play with Position, Size, Color, or Scale, but what you really want is the Custom Shader option. However, you'll notice that there's no shader available yet.

Step 4: Add a Custom Shader: Method 1

Add a new empty source file (File > New > File...), choose Other > Empty from the iOS section, and name it Shader01.fsh. Add the following code to the file you've just created.

void main()
{
    float currTime = u_time;
    vec2 uv = v_tex_coord;
    vec2 circleCenter = vec2(0.5, 0.5);
    vec3 circleColor = vec3(0.8, 0.5, 0.7);
    vec3 posColor = vec3(uv, 0.5 + 0.5 * sin(currTime)) * circleColor;
    float illu = pow(1. - distance(uv, circleCenter), 4.) * 1.2;
    illu *= (2. + abs(0.4 + cos(currTime * -20. + 50. * distance(uv, circleCenter)) / 1.5));
    gl_FragColor = vec4(posColor * illu * 2., illu * 2.) * v_color_mix.a;
}

The code block above blends colors based on each pixel's distance from the center of a circle. Apple showed this shader in its SpriteKit session at WWDC 2014.

Return to the editor, select the color sprite object, and in the Custom Shader select the shader you've just created. You should now see the shader in action.

Custom Shader

Step 5: Real Time Feedback

Programming shaders using Xcode and SpriteKit is easy, because you receive real time feedback. Open the Assistant Editor and configure it to show both the SpriteKit scene as well as the shader you've just created.

Let's see how this works. Introduce an error in the shader, for example by changing a variable's name, and save the changes to see the result.

Real-time feedback

As you can see, Xcode provides a quick and easy way to alert the developer about possible shader errors. The advantage is that you don't need to build or deploy your application to your device or the iOS Simulator to see if everything is running fine.

It's now time to add another shader and manually program it.

Step 6: Add a Custom Shader: Method 2

In this step, you'll learn how to:

  • call a shader manually
  • assign a shader to a SpriteKit object
  • create and send properties to a shader

In this step, you'll add a custom SKSpriteNode at the position of the user's tap and then you'll use a shader to modify the texture color of the SKSpriteNode.

The first step is to add another shader. Name the new shader shader02.fsh and add the following code block to the shader's file:

void main()
{
    gl_FragColor = texture2D(myTexture, v_tex_coord) * vec4(1, 0.2, 0.2, 1);
}

Open the implementation file of the ShaderScene class. The first step is to detect whether the user has tapped the screen and find the location of the tap. For that, we need to implement the touchesBegan:withEvent: method. Inside this method, add an SKSpriteNode instance at the location of the tap. You can use any sprite you like. I've used Spaceship.png, which is already included in the project.

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches){
        CGPoint location = [touch locationInNode:self];
        // Create the node
        SKSpriteNode *space = [SKSpriteNode spriteNodeWithImageNamed:@"Spaceship.png"];
        space.position = CGPointMake(location.x, location.y);
        [self addChild:space];
    }
}

We then create an SKShader object and initialize it using the shader02.fsh file:

SKShader *shader = [SKShader shaderWithFileNamed:@"shader02.fsh"];

You may have noticed that the shader's source file references a myTexture object. This isn't a predefined shader uniform, but a reference your application needs to pass to the shader. The following code snippet illustrates how to do this.

shader.uniforms = @[ [SKUniform uniformWithName:@"myTexture" texture:[SKTexture textureWithImageNamed:@"Spaceship.png"]] ];

We then add the shader to the SKSpriteNode object.

space.shader = shader;

This is what the touchesBegan:withEvent: method should look like:

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches){
        CGPoint location = [touch locationInNode:self];
        // Create the node
        SKSpriteNode *space = [SKSpriteNode spriteNodeWithImageNamed:@"Spaceship.png"];
        space.position = CGPointMake(location.x, location.y);
        [self addChild:space];
        SKShader *shader = [SKShader shaderWithFileNamed:@"shader02.fsh"];
        shader.uniforms = @[ [SKUniform uniformWithName:@"myTexture" texture:[SKTexture textureWithImageNamed:@"Spaceship.png"]] ];
        space.shader = shader;
    }
}

Build and run your project. Tap the Shaders (initWithSize) button and tap the screen. Every time you tap the screen, a spaceship sprite is added with a modified texture.

Example of shaders using the initWithSize button

With this option, the first shader isn't shown on screen, because it was created and configured inside the SpriteKit Scene editor. To see it, you need to initialize the ShaderScene class using the unarchiveFromFile: method.

In GameScene.m, you should see a section that detects and parses the user's taps in touchesBegan:withEvent:. In the second if clause, we initialize a ShaderScene instance as shown below.

if ([node.name isEqualToString:@"buttonShaderCoder"]) {
    ShaderScene *scene = [ShaderScene unarchiveFromFile:@"ShaderSceneEditor"];
    [self.scene.view presentScene:scene];
}

Build and run your project again, tap the Shaders (initWithCoder) button, and tap the screen. Both shaders are now active in a single SpriteKit scene.

Example of shaders using initWithCoder button

4. Lighting and Shadows

Lighting and shadows are two features that work together. The aim of this section is to add several light nodes and sprites, and play with their properties.

Step 1: Add a Light

Open LightingSceneEditor.sks and browse the objects inside the Media Library on the right. In the Media Library, you can see the resources included in the project.

Select and drag background.jpg to the yellow rectangle. If you haven't changed the default scene resolution, the image should fit inside the rectangle.

When you select the sprite, you'll notice that it has several properties like Position, Size, Z Position, Lighting Mask, Shadow Casting Mask, Physics Definition, and many others.

SKSpriteNode Properties

Feel free to play with these properties. For now, though, it's important that you leave the properties at their defaults. Drag a Light object from the Object Library on the right onto the background sprite. The position of the light isn't important, but the light's other properties are.

You can set the Color, Shadow, and Ambient color properties to configure the light and its shadow. The Z Position is the node's height relative to its parent node; set it to 1. The Lighting Mask defines which categories this light belongs to. When a scene is rendered, a light's categoryBitMask property is compared to each sprite node's lightingBitMask, shadowCastBitMask, and shadowedBitMask properties. If the values match, that sprite interacts with the light. This enables you to define multiple lights that each interact with one or more objects.

You've probably noticed that the background has not changed after adding the light. That happens because the lighting mask of the light and the background are different. You need to set the background's lighting mask to that of the light, which is 1 in our example.

Update the background in the SKNode Inspector and press enter. The effect of this change is immediate. The light now illuminates the background based on its position. You can modify the light's position to see the interaction between the background and light nodes in real time.

To increase the realism of the background or emphasize one of its features, play with the Smoothness and Contrast properties. Play with the values to see the changes in real time.

Step 2: Populate the Scene

It's now time to add a few objects that interact with the light node. In the Media Library, find the croquette-o.png and croquette-x.png sprites and add them to the scene.

Each sprite needs to be configured individually. Select each sprite and set the Lighting Mask, Shadow Cast Mask, and the Z Position to 1. The lighting mask ensures that the sprite is affected by the light node, while the shadow cast mask creates a real-time shadow based on the position of the light node. Finally, set the Body Type (Physics Definition) to None. Do this for both sprites.

Physics Definition

You may have noticed that, even after setting the lighting and shadow properties, you can't see the interaction between the light and the nodes in the editor. For that, you need to build and run the project on a physical device or in the Simulator.

Lighting result

Step 3: Manual Lighting

You already know how to add lights using the scene editor. Let's see how to add a light without using the scene editor.

Open LightingScene.m. Inside the didMoveToView: method, we create an SKSpriteNode object and an SKLightNode object.

For the SKSpriteNode object, we use the Wicked-Cat.png sprite. The position of the node isn't that important, but the values of zPosition, shadowCastBitMask, and lightingBitMask are. Set the node's zPosition to 1 so the sprite is rendered on top of the background sprite, and set shadowCastBitMask and lightingBitMask to 1.

This is what the didMoveToView: method looks like so far:

- (void)didMoveToView:(SKView *)view {
    SKSpriteNode *sprite = [SKSpriteNode spriteNodeWithImageNamed:@"Wicked-Cat.png"];
    [sprite setPosition:CGPointMake(self.frame.size.width/2, self.frame.size.height/2)];
    [sprite setScale:0.6];
    [sprite setZPosition:1];
    [sprite setShadowCastBitMask:1];
    [sprite setLightingBitMask:1];
    [self addChild:sprite];
}

Next, let's add the SKLightNode object. Pay special attention to the categoryBitMask property: if you set it to 1, this light interacts with every sprite whose lighting mask includes 1. Name the light light and set its zPosition to 1.

The complete snippet for the SKLightNode should look like this:

SKLightNode* light = [[SKLightNode alloc] init];
[light setName:@"light"];
[light setPosition:CGPointMake(100, 100)];
[light setCategoryBitMask:1];
[light setFalloff:1.5];
[light setZPosition:1];
[light setAmbientColor:[UIColor whiteColor]];
[light setLightColor:[[UIColor alloc] initWithRed:1.0 green:0.0 blue:0.0 alpha:.5]];
[light setShadowColor:[[UIColor alloc] initWithRed:0.9 green:0.25 blue:0.0 alpha:.5]];
[self addChild:light];

Step 4: Change the Light Location

At this point, you have a second light. Let's add some user interaction by implementing the touchesMoved:withEvent: method and updating the light's position based on the location of the touch.

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches) {
        CGPoint location = [touch locationInNode:self];
        [self childNodeWithName:@"light"].position = CGPointMake(location.x, location.y);
    }
}

Finally, build and run your application. Tap the Lighting button and you should see something similar to the screenshot below:

Complete lighting example

Conclusion

This concludes the first tutorial in our two-part series on the new SpriteKit framework features introduced in iOS 8. In this part, you learned to create custom shaders and lighting effects using both the SpriteKit Scene editor and through code. If you have any questions or comments, as always, feel free to drop a line in the comments.

2014-12-17T15:15:44.000Z Orlando Pereira

iOS 8: What's New in SpriteKit, Part 1

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-22387

This tutorial gives an overview of the new features of the SpriteKit framework that were introduced in iOS 8. The new features are designed to make it easier to support advanced game effects and include support for custom OpenGL ES fragment shaders, lighting, shadows, advanced new physics effects and animations, and integration with SceneKit. In this tutorial, you'll learn how to implement these new features.

Before starting the tutorial, I would like to thank Mélodie Deschans (Wicked Cat) for providing us with the game art used in this series.

Prerequisites

This tutorial assumes that you are familiar with both SpriteKit and Objective-C. To interact with the shader and the scene editor without input lag, I recommend that you download and install Xcode 6.1 or later. Download the Xcode project from GitHub, if you'd like to follow along.

Series Format

This series is split up into two tutorials and covers the most important new features of the SpriteKit framework. In the first part, we take a look at shaders, lighting, and shadows. In the second part, I'll talk about physics and SceneKit integration.

While each part of this series stands on its own, I recommend following along step-by-step to properly understand the new features of the SpriteKit framework. After reading both parts, you'll be able to create both simple and more advanced games using the new features of the SpriteKit framework.

1. Introduction

SpriteKit provides a rendering pipeline that can be used to animate sprites. The rendering pipeline contains a rendering loop that alternates between determining the contents and rendering frames. The developer determines the contents of each frame and how it changes. SpriteKit uses the GPU of the device to efficiently render each frame.

The SpriteKit framework is available on both iOS and OS X, and it supports many different kinds of content, including sprites, text, shapes, and video.

The new SpriteKit features introduced in iOS 8 are:

  • Shaders: Shaders customize how things are drawn to the screen. They are useful to add or modify effects. The shaders are based on the OpenGL ES fragment shader. Each effect is applied on a per-pixel basis. You use a C-like programming language to program the shader and it can be deployed to both iOS and OS X. A shader can be applied to a scene or to supported classes, SKSpriteNode, SKShapeNode, SKEmitterNode, SKEffectNode, and SKScene.
  • Lighting & Shadows: Lighting is used to illuminate a scene or sprite. Each light supports color, shadows, and fall-off configurations. You can have up to eight different lights per sprite.
  • Physics: Physics are used to add realism to games. SpriteKit introduces four new types of physical properties, per-pixel physics, constraints, inverse kinematics, and physics fields. The per-pixel properties provide an accurate representation of the interaction of an object. Thanks to a variety of predefined constraints, boilerplate code can be removed in scene updates. Inverse kinematics are used to represent joints using sprites (anchor points, parent-child relationships, maximum and minimum rotation, and others). Finally, you can create physics fields to simulate gravity, drag, and electromagnetic forces. These new physics features make complex simulations much easier to implement.
  • SceneKit Integration: Through SceneKit, you can include 3D content in SpriteKit applications and control them like regular SKNode instances. It renders 3D content directly inside the SpriteKit rendering pipeline. You can import existing .dae or .abc files to SKScene.

2. Project Overview

I've created an Xcode project to get us started. It allows us to immediately start using the new SpriteKit features. However, there are a few things to be aware of.

  • The project uses Objective-C, targeting only iPhone devices running iOS 8.1. However, you can change the target device if you like.
  • Under Resources >Editor, you'll find three SpriteKit scene (.sks) files. In this series, you'll add a fourth SpriteKit scene file. Each scene file is responsible for a specific tutorial section.
  • A shader can be initialized one of two ways. The first uses the traditional method while the second uses the new SpriteKit scene method. The objective is that you learn the differences and, in future projects, choose the one that fits your needs.
  • If you instantiate an SKScene object using a SpriteKit scene file, you'll always use the unarchiveFromFile: method. However, it is mandatory that you add for each SpriteKit scene file the corresponding SKScene class.
  • If you instantiate an SKScene object without using a SpriteKit scene file, you should use the initWithSize: method like you used to do in earlier versions of iOS.
  • The GameViewController and GameScene classes contain a method named unarchiveFromFile:. This method transforms graphical objects defined in a SpriteKit scene and turn them into an SKScene object. The method uses the instancetype keyword, since it returns an instance of the class it calls, in this case the SKScene class.

Download the project and take a moment to browse its folders, classes, and resources. Build and run the project on a physical device or in the iOS Simulator. If the application is running without problems, then it's time to start exploring the new iOS 8 SpriteKit features.

3. Shaders

Step 1: Create SpriteKit Scene

In the Xcode project, add a new SpriteKit Scene file. Choose File >New >File... and, from the Resource section, choose SpriteKit Scene. Name it ShaderSceneEditor and click Create. A grey interface should appear.

Step 2: SpriteKit Scene Configuration

In the SKNode Inspector on the right, you should see two properties, Size and Gravity. Set the Size property taking into account your device screen resolution and set Gravity to 0.0.

SKNode Inspector

You'll notice that the size of the yellow rectangle changes to reflect the changes you've made. The yellow rectangle is your virtual device interface. It shows you how objects are displayed on your device.

Step 3: Add a Color Sprite

Inside the Object Library on the right, select the Color Sprite and drag it into the yellow rectangle.

Object Library

Select the color sprite and open the SKNode Inspector on the right to see its properties.

SKNode Inspector of color sprite

You can interact with the object in real time. Any changes you make are displayed in the editor. You can play with Position, Size, Color, or Scale, but what you really want is the Custom Shader option. However, you'll notice that there's no shader available yet.

Step 4: Add a Custom Shader: Method 1

Add a new empty source file (File > New >File...), choose Other > Empty from the iOS section, and name it Shader01.fsh. Add the following code to the file you've just created.

void main()
{
    float currTime = u_time;
    vec2 uv = v_tex_coord;
    vec2 circleCenter = vec2(0.5, 0.5);
    vec3 circleColor = vec3(0.8, 0.5, 0.7);
    vec3 posColor = vec3(uv, 0.5 + 0.5 * sin(currTime)) * circleColor;
    float illu = pow(1. - distance(uv, circleCenter), 4.) * 1.2;
    illu *= (2. + abs(0.4 + cos(currTime * -20. + 50. * distance(uv, circleCenter)) / 1.5));
    gl_FragColor = vec4(posColor * illu * 2., illu * 2.) * v_color_mix.a;
}

The above code block generates a fusion of colors taking into consideration the center of a circle and its edge. Apple showed this shader in their SpriteKit session during WWDC 2014.

Return to the editor, select the color sprite object, and in the Custom Shader select the shader you've just created. You should now see the shader in action.

Custom Shader

Step 5: Real Time Feedback

Programming shaders using Xcode and SpriteKit is easy, because you receive real time feedback. Open the Assistant Editor and configure it to show both the SpriteKit scene as well as the shader you've just created.

Let's see how this works. Introduce a runtime error in the shader, for example, by changing a variable's name and save the changes to see the result.

Real-time feedback

As you can see, Xcode provides a quick and easy way to alert the developer about possible shader errors. The advantage is that you don't need to build or deploy your application to your device or the iOS Simulator to see if everything is running fine.

It's now time to add another shader and manually program it.

Step 6: Add a Custom Shader: Method 2

In this step, you'll learn how to:

  • call a shader manually
  • assign a shader to a SpriteKit object
  • create and send properties to a shader

In this step, you'll add a custom SKSpriteNode at the position of the user's tap and then you'll use a shader to modify the texture color of the SKSpriteNode.

The first step is to add another shader. Name the new shader shader02.fsh and add the following code block to the shader's file:

void main()
{
    gl_FragColor = texture2D(myTexture,v_tex_coord) * vec4(1, 0.2, 0.2, 1);
}

Open the implementation file of the ShaderScene class. The first step is to detect whether the user has tapped the screen and find the location of the tap. For that, we need to implement the touchesBegan:withEvent: method. Inside this method, add a SKSpriteNode instance at the location of the tap. You can use any sprite you like. I've used Spaceship.png, which is already included in the project.

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches){
        CGPoint location = [touch locationInNode:self];
        // Create the node
        SKSpriteNode *space = [SKSpriteNode spriteNodeWithImageNamed:@"Spaceship.png"];
        space.position = CGPointMake(location.x, location.y);
        [self addChild:space];
    }
}

We then create a SKShader object and initialize it using the shader02.fsh file:

SKShader *shader = [SKShader shaderWithFileNamed:@"shader02.fsh"];

You may have noticed that the shader's source file references a myTexture object. This isn't a predefined shader property, but a reference your application needs to pass to the shader. The following code snippet illustrates how to do this.

shader.uniforms = @[ [SKUniform uniformWithName:@"myTexture" texture:[SKTexture textureWithImageNamed:@"Spaceship.png"]] ];

We then add the shader to the SKSpriteNode object.

space.shader = shader;

This is what the touchesBegan:withEvent: method should look like:

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches){
        CGPoint location = [touch locationInNode:self];
        // Create the node
        SKSpriteNode *space = [SKSpriteNode spriteNodeWithImageNamed:@"Spaceship.png"];
        space.position = CGPointMake(location.x, location.y);
        [self addChild:space];
        SKShader *shader = [SKShader shaderWithFileNamed:@"shader02.fsh"];
        shader.uniforms = @[ [SKUniform uniformWithName:@"myTexture" texture:[SKTexture textureWithImageNamed:@"Spaceship.png"]] ];
        space.shader = shader;
    }
}

Build and run your project. Tap the Shaders (initWithSize) button and tap the screen. Every time you tap the screen, a spaceship sprite is added with a modified texture.

Example of shaders using the initWithSize button

Using this option, you see that the first shader is not presented on screen. This happens because that shader was created and configured inside the SpriteKit Scene editor. To see it, you need to initialize the ShaderScene class using the unarchiveFromFile: method.

In GameScene.m, you should see a section that detects and parses the user's taps in touchesBegan:withEvent:. In the second if clause, we initialize a ShaderScene instance as shown below.

if ([node.name isEqualToString:@"buttonShaderCoder"]) {
    ShaderScene *scene = [ShaderScene unarchiveFromFile:@"ShaderSceneEditor"];
    [self.scene.view presentScene:scene];
}

Build and run your project again, tap the Shaders (initWithCoder) button, and tap the screen. Both shaders are now active in a single SpriteKit scene.

Example of shaders using initWithCoder button

4. Lighting and Shadows

Lighting and shadows are two properties that play together. The aim of this section is to add several light nodes and sprites, and play with their properties.

Step 1: Add a Light

Open LightingSceneEditor.sks and browse the objects inside the Media Library on the right. In the Media Library, you can see the resources included in the project.

Select and drag background.jpg to the yellow rectangle. If you haven't changed the default scene resolution, the image should fit inside the rectangle.

When you select the sprite, you'll notice that it has several properties like Position, Size, Z Position, Lighting Mask, Shadow Casting Mask, Physics Definition, and many others.

SKSpriteNode Properties

Feel free to play with these properties. For now, though, it's important that you leave the properties at their defaults. Drag a Light object from the Object Library on the right onto the background sprite. The position of the light isn't important, but the light's other properties are.

You can configure the Color, Shadow, and Ambient color to configure the light and shadow. The Z Position is the node's height relative to its parent node. Set it to 1. The Lighting Mask defines which categories this light belongs to. When a scene is rendered, a light’s categoryBitMask property is compared to each sprite node's lightingBitMask, shadowCastBitMask, and shadowedBitMask properties. If the values match, that sprite interacts with the light. This enables you to define and use multiple lights that interact with one or more objects.

You've probably noticed that the background has not changed after adding the light. That happens because the lighting mask of the light and the background are different. You need to set the background's lighting mask to that of the light, which is 1 in our example.

Update the background in the SKNode Inspector and press enter. The effect of this change is immediate. The light now illuminates the background based on its position. You can modify the light's position to see the interaction between the background and light nodes in real time.

To increase the realism of the background or emphasize one of its features, play with the Smoothness and Contrast properties. Play with the values to see the changes in real time.

Step 2: Populate the Scene

It's now time to add a few objects that interact with the light node. In the Media Library, find the croquette-o.png and croquette-x.png sprites and add them to the scene.

Each sprite needs to be configured individually. Select each sprite and set the Lighting Mask, Shadow Cast Mask, andthe Z Position to 1. The lighting mask ensures that the sprite is affected by the light node while the shadow cast mask creates a real time shadow based on the position of the light node. Finally, set the Body Type (Physics Definition) to None. Do this for both sprites.

Physics Definition

You should have noticed that, even after setting the properties of lighting and shadow, you cannot see the interaction between the light and the nodes. For that, you need to build and run the project on a physical device or in the Simulator.

Lighting result

Step 3: Manual Lighting

You already know how to add lights using the scene editor. Let's see how to add a light without using the scene editor.

Open the LightingScene.m and inside the didMoveToView: method we create a SKSpriteNode object and a SKLightNode object.

For the SKSpriteNode object, we use the Wicked-Cat.png sprite. The position of the node isn't that important, but the values of zPosition, shadowCastBitMask, and lightingBitMask are. Because SpriteKit parses the data sequentially, you need to set the node's zPosition to 1 for this sprite to be visible, on top of the background sprite. We set shadowCastBitMask and lightingBitMask to 1.

This is what the didMoveToView: method looks like so far:

- (void)didMoveToView:(SKView *)view {
    SKSpriteNode *sprite = [SKSpriteNode spriteNodeWithImageNamed:@"Wicked-Cat.png"];
    [sprite setPosition:CGPointMake(self.frame.size.width/2, self.frame.size.height/2)];
    [sprite setScale:0.6];
    [sprite setZPosition:1];
    [sprite setShadowCastBitMask:1];
    [sprite setLightingBitMask:1];
    [self addChild:sprite];
}

Next, let's add the SKLightNode object. Pay special attention to the categoryBitMask property. If you set it to 1, this light will interact with every sprite whose lighting mask is also 1. Name it light and set its zPosition to 1.

The complete snippet for the SKLightNode should look like this:

SKLightNode* light = [[SKLightNode alloc] init];
[light setName:@"light"];
[light setPosition:CGPointMake(100, 100)];
[light setCategoryBitMask:1];
[light setFalloff:1.5];
[light setZPosition:1];
[light setAmbientColor:[UIColor whiteColor]];
[light setLightColor:[[UIColor alloc] initWithRed:1.0 green:0.0 blue:0.0 alpha:.5]];
[light setShadowColor:[[UIColor alloc] initWithRed:0.9 green:0.25 blue:0.0 alpha:.5]];
[self addChild:light];

Step 4: Change the Light Location

At this point, you have a second light. Let's add some user interaction by implementing the touchesMoved:withEvent: method and updating the light's position based on the location of the touch.

-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {    
    for (UITouch *touch in touches) {
        CGPoint location = [touch locationInNode:self];
        [self childNodeWithName:@"light"].position = CGPointMake(location.x, location.y);
    }
}

Finally, build and run your application. Tap the Lighting button and you should see something similar to the screenshot below:

Complete lighting example

Conclusion

This concludes the first tutorial in our two-part series on the new SpriteKit framework features introduced in iOS 8. In this part, you learned to create custom shaders and lighting effects using both the SpriteKit Scene editor and through code. If you have any questions or comments, as always, feel free to drop a line in the comments.

2014-12-17T15:15:44.000Z2014-12-17T15:15:44.000ZOrlando Pereira

Swift from Scratch: Collections and Tuples

$
0
0

In the previous article, you learned about variables, constants, and some of the common data types, such as integers, floats, and strings. In this article, we zoom in on collections. Swift's standard library defines two collection types, arrays and dictionaries. Let's start with arrays.

1. Arrays

If you're familiar with Objective-C, JavaScript, or PHP, then you won't find it difficult to wrap your head around the concept of arrays. An array is an ordered, zero-indexed collection of values. There are, however, a few important differences.

Type

The first important difference with arrays in Objective-C is that the values stored in an array are always of the same type. At first, this may seem like a significant limitation, but it actually isn't. In fact, this limitation has an important advantage. We know exactly what type we get back when we ask the array for one of its values.

Another key difference with Objective-C arrays is the type of values an array can store. In Objective-C, an array can only store values of a class type. Swift doesn't have this limitation. An array in Swift can store strings, integers, floats as well as class instances. How this works and why this is possible in Swift will become clear later in this series when we cover classes and structures.

Declaration

While creating an array can be done several ways, you need to keep in mind that Swift needs to know what type of values you plan to store in the array. Create a new playground in Xcode like we did in the previous article and add the following lines to your playground.

var array1: Array<String>
var array2: [String]
var array3 = ["Apple", "Pear", "Orange"]

The second line is shorthand for the first line. The square brackets wrapping the String keyword tell Swift that we're declaring an array that can only contain String objects.

You could read the first line of code as "We declare a variable named array1 of type Array that can only contain String objects." Note that I italicized of type since that is what the colon signifies.

The third line shows us how to initialize an array using an array literal. Array literals look very similar to array literals in Objective-C. The main difference is the absence of the @ symbol preceding the square brackets and the string literals.

There's also a fancy way to initialize an array with a predefined number of default values. The syntax may be confusing at first, but take a moment to let it sink in.

var a = [String](count: 5, repeatedValue: "Test")

The resulting array contains five strings, with each string being equal to "Test". To better understand the above initialization, take a look at the following two lines of code in which we initialize an empty array of strings.

var b = Array<String>()
var c = [String]()

Don't worry if you're still confused. We'll explore the syntax in more detail once we start dealing with classes and functions. In this article, we're only focusing on collections.

Mutability

One aspect of Swift that you'll quickly come to appreciate is how to declare mutable collections. The above code snippet, for example, declares three mutable arrays. A mutable array is defined by using the var keyword. It's that simple. If you don't want an array to be mutable, then use the let keyword instead. Swift aims to be intuitive and easy to use, and Swift's implementation of mutability is a perfect example of this goal.
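As a brief sketch of this distinction (the array names here are illustrative), mutating a constant array is rejected at compile time:

```swift
var mutableFruits = ["Apple", "Pear"]
mutableFruits.append("Orange")       // fine, the array was declared with var

let immutableFruits = ["Apple", "Pear"]
// immutableFruits.append("Orange")  // compile-time error: the array is a constant
```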

Getting and Setting Values

To access the values stored in an array, we use the same subscript syntax as in Objective-C. In the following example, we ask array3 for its second element, the string "Pear".

array3[1]

Replacing the value stored at index 1 is as simple as assigning a new value using the same subscript syntax. In the following example, we replace "Pear" at index 1 with "Peach".

array3[1] = "Peach"

This is only possible because the array is mutable, that is, we used the var keyword to declare the array. Mutating a constant array isn't possible. There are more advanced techniques for manipulating the contents of an array, but the underlying concept is the same.
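One such technique, sketched below with an illustrative array, is replacing an entire range of elements through a range subscript:

```swift
var fruits = ["Apple", "Pear", "Orange", "Cherry"]

// Replace the first two elements in one go using a range subscript.
fruits[0...1] = ["Kiwi"]            // fruits is now ["Kiwi", "Orange", "Cherry"]

// Remove the last element; removeLast() also returns the removed value.
let removed = fruits.removeLast()   // removed is "Cherry"
```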

Merging two arrays is as simple as adding them together. In the following example, we declare and merge two immutable arrays. Note that the resulting array, c, doesn't need to be mutable for this to work.

let a = [1, 2, 3]
let b = [4, 5, 6]

let c = a + b

However, it is key that the values stored in a and b are of the same type. The reason should be obvious by now. The values stored in an array need to be of the same type. The following example will result in an error.

let a = [1, 2, 3]
let b = [1.5, 5.2, 6.3]

let c = a + b

To append an array to a mutable array, we use the += operator. Note that the operand on the right is an array. This operation wouldn't work if we removed the square brackets surrounding 4.

var a = [1, 2, 3]
a += [4]

Operations

Arrays are objects on which you can perform a wide range of operations. Arrays expose a number of functions or methods. To invoke a method on an object, you use the dot notation. Add the following line to your playground to add an item to array3.

array3.append("Cherry")

Let's see how many items array3 contains by inspecting its count property. This outputs 4 in the results pane on the right.

array3.count

It's also possible to insert an item at a specific index by invoking the array's insert method as shown below. The insert method accepts more than one parameter and it may look a bit odd at first.

array3.insert("Prune", atIndex: 2)

Like Objective-C, Swift supports named parameters to improve readability. The result is code that is easier to read and understand, and functions or methods need little explanation in terms of what they do. It is clear, for example, that the insert method inserts an element at index 2.

Even though Swift is more concise and less verbose than Objective-C, it retains named parameters. If you're coming from PHP, Ruby, or JavaScript, this is certainly something that will take some getting used to.

Convenience Methods

What I really enjoy about Swift are the Ruby-like convenience properties and methods of Swift's standard library. An array, for example, has an isEmpty property that tells you if the array contains any elements. This is nothing more than shorthand for checking the array's count property. The result, however, is code that is more concise and easier to read.

array3.isEmpty

2. Dictionaries

Dictionaries behave very similarly to dictionaries in Objective-C. A dictionary stores an unordered collection of values. Each value in the dictionary is associated with a key. In other words, a dictionary stores a number of key/value pairs.

Type

As with arrays, the keys stored in a dictionary need to be of the same type, and the same is true for the values. This means that if you ask a dictionary for the value of a particular key, you know what type the dictionary will return.

Declaration

Declaring a dictionary is similar to declaring an array. The difference is that you need to specify the type for both keys and values. The following example shows three ways to declare a dictionary.

var dictionary1: Dictionary<String, Int>
var dictionary2: [String: Int]
var dictionary3 = ["Apple": 3, "Pear": 8, "Orange": 11]

The second line is shorthand for the first line. The keys of these dictionaries need to be of type String while the values are expected to be of type Int. The var keyword indicates that the dictionaries are mutable.

You could read the first line of code as "We declare a variable named dictionary1 of type Dictionary that can only contain keys of type String and values of type Int."

The third line illustrates how we can initialize a dictionary using a dictionary literal. This is similar to the syntax we use in Objective-C, but note that the curly braces are replaced by square brackets and the literal isn't prefixed with an @ symbol.

Getting and Setting Values

Accessing values is similar to accessing values of an array. The only difference is that you use the key instead of the index of the value you need to access. The following example illustrates this.

let value = dictionary3["Apple"]
println(value)

You'll notice that Xcode tells us that the value of value isn't 3, but Optional(3). What does this mean? Swift uses optionals to wrap values that can be one of two things, a value or nil. Don't worry about optionals at this point. We're going to focus on optionals in the next article of this series. Let me just tell you that optionals are another key concept of the Swift programming language.

It's interesting to point out that the syntax to access a value of a dictionary is identical to that of arrays if the keys of the dictionary are of type Int. Take a look at the following example to see what I mean.

var dictionary4 = [0: "Apple", 1: "Pear", 2: "Orange"]
let fruit = dictionary4[0]

Operations

As with arrays, the Swift standard library defines a wide range of operations you can perform on dictionaries. A dictionary returns its number of key/value pairs through its count property. Removing a key/value pair is easy and intuitive, as the next example illustrates. Of course, this is only possible if the dictionary is mutable.

dictionary4.removeValueForKey(0)

When you start learning Swift, you'll occasionally run into code snippets that look odd or confusing. Take a look at the following line in which we first declare a dictionary and then remove its key/value pairs.

var dictionary = [String: Int]()

dictionary["Oranges"] = 2
dictionary["Apples"] = 10
dictionary["Pears"] = 5

dictionary = [:]

You have to admit that the last line looks a bit odd. Because Swift knows the types of the keys and values that can be stored in dictionary, emptying the dictionary is as simple as assigning an empty dictionary to it.

There's no need to specify the types for the keys and values in this case, because we already did when we declared the dictionary on the first line. This points out another important detail, that is, the type of values you can store in arrays and dictionaries cannot change once the collection is declared.
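A short sketch of this compile-time guarantee, using illustrative names; the commented-out line would be rejected by the compiler:

```swift
var counts = [String: Int]()
counts["Apples"] = 10

// The compiler now knows that counts maps String keys to Int values.
// counts["Pears"] = "five"  // compile-time error: a String is not an Int
counts["Pears"] = 5
```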

3. Tuples

You are going to love tuples. While tuples aren't collections, they also group multiple values. Similar to arrays and dictionaries, tuples can contain values of any type. The key difference, however, is that the values stored in a tuple don't need to be of the same type. Let's look at an example to explain this in more detail.

var currency = ("EUR", 0.81)
var time = (NSDate(), "This is my message.")
var email = ("Bart Jacobs", "bart@example.com")

The first example declares a tuple named currency that is of type (String, Double). The second tuple, time, contains an NSDate instance and a string literal. The values stored in email are both of type String, which means email is of type (String, String).

Accessing Values

Indexes

To access a value stored in a tuple, you use the index that corresponds with the value you're interested in.

var rate = currency.1
var message = time.1
var name = email.0

Xcode shows us the indexes of each value stored in a tuple in the results pane of the playground on the right.

Names

To improve readability, you can name the values stored in a tuple. The result is that you can access the values of the tuple through their names instead of their indexes. Declaring a tuple is slightly different in that case.

var currency = (name: "EUR", rate: 0.81)
let currencyName = currency.name
let currencyRate = currency.rate

Decomposition

There's a second, more elegant way to work with the values stored in a tuple. Take a look at the following example in which we decompose the contents of currency.

let (currencyName, currencyRate) = currency

The value of currency at index 0 is stored in currencyName and the value at index 1 is stored in currencyRate. There's no need to specify the type for currencyName and currencyRate, since Swift infers the types from the values stored in currency. In other words, currencyName is of type String and currencyRate is of type Double.

If you're only interested in specific values of a tuple, you can use an underscore to tell Swift which values you're not interested in.

let (currencyName, _) = currency

Conclusion

Arrays and dictionaries are fundamental components of almost every programming language and Swift is no different. While collections behave a little differently in Swift, it doesn't take long to become familiar with Swift's collection types if you've worked with arrays and dictionaries in other programming languages. In the next tutorial, we explore optionals and control flow.

2014-12-19T17:45:10.000Z2014-12-19T17:45:10.000ZBart Jacobs

Create a YouTube Client on Android

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-22858
Final product image
What You'll Be Creating

There are a lot of popular third party YouTube clients on Google Play, such as Viral Popup and PlayTube, that manage to offer some unique and additional functionality that the official YouTube app doesn't. If you want to build one such app yourself, this tutorial is for you.

In this tutorial, we create our own YouTube client that can not only search for videos on YouTube, but also play them. Along the way, we'll learn how to use the YouTube Android Player API and the YouTube Data API client library for Java.

Prerequisites

Ensure that you have the latest Eclipse ADT Bundle set up. You can download it at the Android Developer website.

You must also have a developer key to use the YouTube API. Follow the steps on Google's YouTube Developer website to get one.

1. Create a New Project

Fire up Eclipse and create a new Android application. Name the application SimplePlayer. Choose a unique package name, set the minimum required SDK to Android 2.2, and set the target SDK to Android 4.X (L Preview).

We're going to create the Activity ourselves, so deselect Create Activity and click Finish.

2. Adding Libraries

Step 1: Download Libraries

You will need the following libraries for this project:

  • YouTube Android Player API: This library lets your app embed and control YouTube videos seamlessly. At the time of writing, the latest version of this library is 1.0.0. You can download it from the Google Developers website.
  • YouTube Data API v3 Client Library for Java: This library lets your app query information on YouTube. We are going to use it to enable our app to search for videos on YouTube. This is also available on the Google Developers website.
  • Picasso: This library makes it easy to fetch and display remote images. We are going to use it to fetch thumbnails of YouTube videos. The latest version currently is 2.4.0 and you can download it directly from the Maven repository.

Step 2: Add Libraries

To add the YouTube Android Player API, unzip YouTubeAndroidPlayerApi-1.0.0.zip and copy the file YouTubeAndroidPlayerApi.jar to the libs folder of your project.

To add the YouTube Data API v3 library and its dependencies, unzip google-api-services-youtube-v3-rev124-java-1.19.0.zip and copy the following files to the libs folder of your project:

  • google-api-services-youtube-v3-rev124-1.19.0.jar
  • google-api-client-1.19.0.jar
  • google-oauth-client-1.19.0.jar
  • google-http-client-1.19.0.jar
  • jsr305-1.3.9.jar
  • google-http-client-jackson2-1.19.0.jar
  • jackson-core-2.1.3.jar
  • google-api-client-android-1.19.0.jar
  • google-http-client-android-1.19.0.jar

Finally, to add Picasso, copy picasso-2.4.0.jar to the libs folder of your project.

3. Edit the Manifest

The only permission our app needs is android.permission.INTERNET to access YouTube's servers. Add the following to AndroidManifest.xml:

<uses-permission android:name="android.permission.INTERNET"/>

Our app has two activities, one to search for videos and one to play them. To avoid having to handle orientation changes in this tutorial, we force both activities to use landscape mode only. Declare the activities in the manifest by adding the following code to it:

<activity
    android:name=".SearchActivity"
    android:screenOrientation="landscape">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>

<activity
    android:name=".PlayerActivity"
    android:screenOrientation="landscape" />

4. Edit strings.xml

The res/values/strings.xml file contains the strings that our app uses. Update its contents as shown below:

<resources>
    <string name="app_name">SimplePlayer</string>
    <string name="search">Search</string>
    <string name="failed">Failed to initialize YouTube Player</string>
</resources>

5. Create Layout for SearchActivity

Step 1: Create Layout

SearchActivity needs the following views:

  • EditText: to allow the user to type in the search keywords
  • ListView: to display the search results
  • LinearLayout: this view serves as the parent view of the aforementioned views

Create a new file named layout/activity_search.xml and add the following code to it:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <EditText
        android:id="@+id/search_input"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:hint="@string/search"
        android:singleLine="true" />

    <ListView
        android:id="@+id/videos_found"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:dividerHeight="5dp" />

</LinearLayout>

Step 2: Layout Out Search Results

Each search result refers to a video on YouTube and we need a layout to display information about that video. Therefore, each item of the ListView needs to contain the following views:

  • ImageView: to display the thumbnail of the video
  • TextView: to display the title of the video
  • TextView: to display the description of the video
  • RelativeLayout: this view acts as the parent view of the other views

Create a file named layout/video_item.xml and add the following code to it:

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:padding="16dp">

    <ImageView
        android:id="@+id/video_thumbnail"
        android:layout_width="128dp"
        android:layout_height="128dp"
        android:layout_alignParentLeft="true"
        android:layout_alignParentTop="true"
        android:layout_marginRight="20dp" />

    <TextView
        android:id="@+id/video_title"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_toRightOf="@+id/video_thumbnail"
        android:layout_alignParentTop="true"
        android:layout_marginTop="5dp"
        android:textSize="25sp"
        android:textStyle="bold" />

    <TextView
        android:id="@+id/video_description"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_toRightOf="@+id/video_thumbnail"
        android:layout_below="@+id/video_title"
        android:textSize="15sp" />

</RelativeLayout>

6. Create Layout for PlayerActivity

Step 1: Create Layout

PlayerActivity needs the following views:

  • YouTubePlayerView: to play YouTube videos
  • LinearLayout: this view is the parent view of YouTubePlayerView

Create a new file named layout/activity_player.xml and add the following code to it:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical" >

    <com.google.android.youtube.player.YouTubePlayerView
        android:id="@+id/player_view"
        android:layout_width="match_parent"
        android:layout_height="wrap_content" />

</LinearLayout>

7. Create VideoItem

Create a new Java class named VideoItem.java. We use this class to store the following information about a YouTube video:

  • YouTube ID
  • title
  • description
  • thumbnail URL

All of the above are stored as strings. After adding getters and setters for each of them, the VideoItem.java file should look like this:

package com.hathi.simpleplayer;

public class VideoItem {
    private String title;
    private String description;
    private String thumbnailURL;
    private String id;

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getTitle() {
        return title;
    }

    public void setTitle(String title) {
        this.title = title;
    }

    public String getDescription() {
        return description;
    }

    public void setDescription(String description) {
        this.description = description;
    }

    public String getThumbnailURL() {
        return thumbnailURL;
    }

    public void setThumbnailURL(String thumbnail) {
        this.thumbnailURL = thumbnail;
    }
}
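As a quick sanity check, the holder can be exercised like this. This is a standalone sketch; the class name VideoItemDemo and the sample values are ours, chosen purely for illustration:

```java
// Standalone demo of the data holder above. The nested class is a compact
// copy of VideoItem so this snippet compiles on its own.
public class VideoItemDemo {

    public static void main(String[] args) {
        VideoItem item = new VideoItem();
        item.setId("abc123");
        item.setTitle("Sample video");
        item.setDescription("A short description");
        item.setThumbnailURL("http://example.com/thumb.jpg");

        // The adapter we build later reads the item back through its getters.
        System.out.println(item.getTitle() + " (" + item.getId() + ")");
        // prints "Sample video (abc123)"
    }

    static class VideoItem {
        private String title, description, thumbnailURL, id;

        public String getId() { return id; }
        public void setId(String id) { this.id = id; }
        public String getTitle() { return title; }
        public void setTitle(String title) { this.title = title; }
        public String getDescription() { return description; }
        public void setDescription(String description) { this.description = description; }
        public String getThumbnailURL() { return thumbnailURL; }
        public void setThumbnailURL(String url) { this.thumbnailURL = url; }
    }
}
```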

8. Create a Helper Class

To avoid having to deal with the YouTube Data API directly in our Activity, create a new Java class and name it YoutubeConnector.java. This class has the following member variables:

  • an instance of the YouTube class that will be used for communicating with the YouTube API
  • an instance of YouTube.Search.List to represent a search query
  • the YouTube API key as a static String

We initialize the above variables in the constructor. To initialize the instance of YouTube, the YouTube.Builder class has to be used. The classes that will be responsible for the network connection and the JSON processing are passed to the builder.

Once initialized, its search method is used to create a search request. The list method is then used to mention the details we want in the search results. For this tutorial, we are going to need an id and snippet for each search result. From those, we extract the following fields:

  • id/videoId
  • snippet/title
  • snippet/description
  • snippet/thumbnails/default/url

The developer's API key needs to be sent with every search request. The setKey method is used for this purpose. We also use the setType method to restrict the search results to videos only. At this point, the class should look something like this:

package com.hathi.simpleplayer;

import java.io.IOException;

import android.content.Context;
import android.util.Log;

import com.google.api.client.http.HttpRequest;
import com.google.api.client.http.HttpRequestInitializer;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.youtube.YouTube;

public class YoutubeConnector {
    private YouTube youtube;
    private YouTube.Search.List query;

    // Your developer key goes here
    public static final String KEY
        = "AIzaSQZZQWQQWMGziK9H_qRxz8g-V6eDL3QW_Us";

    public YoutubeConnector(Context context) {
        youtube = new YouTube.Builder(new NetHttpTransport(),
                new JacksonFactory(), new HttpRequestInitializer() {
            @Override
            public void initialize(HttpRequest request) throws IOException {}
        }).setApplicationName(context.getString(R.string.app_name)).build();

        try {
            query = youtube.search().list("id,snippet");
            query.setKey(KEY);
            query.setType("video");
            query.setFields("items(id/videoId,snippet/title,snippet/description,snippet/thumbnails/default/url)");
        } catch (IOException e) {
            Log.d("YC", "Could not initialize: " + e);
        }
    }
}

Next, we create a method named search to perform the search based on the user's keywords. This method accepts the keywords as a String parameter. The query variable's setQ method is used to set the keywords.

We then run the query using its execute method. The results are returned in the form of a SearchListResponse instance. We iterate through the result items and create a new List of VideoItem objects, which will be the return value of this method. After adding appropriate error handling, the search method should look like this:

public List<VideoItem> search(String keywords) {
    query.setQ(keywords);
    try {
        SearchListResponse response = query.execute();
        List<SearchResult> results = response.getItems();
        List<VideoItem> items = new ArrayList<VideoItem>();
        for (SearchResult result : results) {
            VideoItem item = new VideoItem();
            item.setTitle(result.getSnippet().getTitle());
            item.setDescription(result.getSnippet().getDescription());
            item.setThumbnailURL(result.getSnippet().getThumbnails().getDefault().getUrl());
            item.setId(result.getId().getVideoId());
            items.add(item);
        }
        return items;
    } catch (IOException e) {
        Log.d("YC", "Could not search: " + e);
        return null;
    }
}

9. Create SearchActivity

Create a new class named SearchActivity.java. This class has fields that represent the views we mentioned in activity_search.xml. It also has a Handler to make updates on the user interface thread.

In the onCreate method, we initialize the views and add an OnEditorActionListener to the EditText to know when the user has finished entering keywords.

public class SearchActivity extends Activity {

    private EditText searchInput;
    private ListView videosFound;

    private Handler handler;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_search);

        searchInput = (EditText) findViewById(R.id.search_input);
        videosFound = (ListView) findViewById(R.id.videos_found);

        handler = new Handler();

        searchInput.setOnEditorActionListener(new TextView.OnEditorActionListener() {
            @Override
            public boolean onEditorAction(TextView v, int actionId, KeyEvent event) {
                if (actionId == EditorInfo.IME_ACTION_DONE) {
                    searchOnYoutube(v.getText().toString());
                    return false;
                }
                return true;
            }
        });
    }
}

You may have noticed the call to the searchOnYoutube method. Let's define that method now. In it, we create a new Thread to initialize a YoutubeConnector instance and run its search method. A new thread is necessary, because network operations cannot be performed on the main user interface thread; if you forget this, you'll face a NetworkOnMainThreadException at runtime. Once the results are available, the handler is used to update the user interface.

private List<VideoItem> searchResults;

private void searchOnYoutube(final String keywords) {
    new Thread() {
        public void run() {
            YoutubeConnector yc = new YoutubeConnector(SearchActivity.this);
            searchResults = yc.search(keywords);
            handler.post(new Runnable() {
                public void run() {
                    updateVideosFound();
                }
            });
        }
    }.start();
}

In the updateVideosFound method, we generate an ArrayAdapter and pass it on to the ListView to display the search results. In the getView method of the adapter, we inflate the video_item.xml layout and update its views to display information about the search result.

The Picasso library's load method is used to fetch the thumbnail of the video and the into method is used to pass it to the ImageView.

private void updateVideosFound(){
	ArrayAdapter<VideoItem> adapter = new ArrayAdapter<VideoItem>(getApplicationContext(), R.layout.video_item, searchResults){
		@Override
		public View getView(int position, View convertView, ViewGroup parent) {
			if(convertView == null){
				convertView = getLayoutInflater().inflate(R.layout.video_item, parent, false);
			}
			ImageView thumbnail = (ImageView)convertView.findViewById(R.id.video_thumbnail);
			TextView title = (TextView)convertView.findViewById(R.id.video_title);
			TextView description = (TextView)convertView.findViewById(R.id.video_description);
			VideoItem searchResult = searchResults.get(position);
			Picasso.with(getApplicationContext()).load(searchResult.getThumbnailURL()).into(thumbnail);
			title.setText(searchResult.getTitle());
			description.setText(searchResult.getDescription());
			return convertView;
		}
	};			
	videosFound.setAdapter(adapter);
}

Finally, we need a method that sets the OnItemClickListener of the ListView so that the user can click on a search result and watch the corresponding video. Let's name this method addClickListener and call it at the end of the onCreate method.

When an item in the list is tapped, we create a new Intent for the PlayerActivity and pass in the ID of the video. Once the Intent is created, the startActivity method is used to launch the PlayerActivity.

private void addClickListener() {
    videosFound.setOnItemClickListener(new AdapterView.OnItemClickListener() {
        @Override
        public void onItemClick(AdapterView<?> av, View v, int pos, long id) {
            Intent intent = new Intent(getApplicationContext(), PlayerActivity.class);
            intent.putExtra("VIDEO_ID", searchResults.get(pos).getId());
            startActivity(intent);
        }
    });
}

10. Create PlayerActivity

Create a new Java class named PlayerActivity.java that inherits from YouTubeBaseActivity. This is important, because only subclasses of the YouTubeBaseActivity can make use of the YouTubePlayerView.

This class has a single member variable that represents the YouTubePlayerView we mentioned in the activity_player.xml layout file. This is initialized in the onCreate method by invoking the initialize method of the YouTubePlayerView class, passing in the developer API key.

Next, our class needs to implement the OnInitializedListener interface to know when the initialization is complete. The interface has two methods, named onInitializationFailure and onInitializationSuccess.

In case of success, the cueVideo method is used to display the YouTube video. In case of failure, a Toast is shown that tells the user that the initialization failed.

This is what the PlayerActivity class should look like:

public class PlayerActivity extends YouTubeBaseActivity implements OnInitializedListener {
	private YouTubePlayerView playerView;
	@Override
	protected void onCreate(Bundle bundle) {
		super.onCreate(bundle);
		setContentView(R.layout.activity_player);

	    playerView = (YouTubePlayerView)findViewById(R.id.player_view);
	    playerView.initialize(YoutubeConnector.KEY, this);	    	   
	}

	@Override
	public void onInitializationFailure(Provider provider,
			YouTubeInitializationResult result) {
		Toast.makeText(this, getString(R.string.failed), Toast.LENGTH_LONG).show();
	}

	@Override
	public void onInitializationSuccess(Provider provider, YouTubePlayer player,
			boolean restored) {
		if(!restored){			
			player.cueVideo(getIntent().getStringExtra("VIDEO_ID"));
		}
	}
}

11. Compile and Run

Our YouTube client is now ready to be deployed to an Android device. Make sure the YouTube app is installed, and up to date, on the device, because our app depends on it; almost all popular Android devices come with it preinstalled.

Once deployed, you should be able to type in a query to search for videos on YouTube and then click on a result to start playing the corresponding video.

Conclusion

You now know how to embed YouTube videos in your Android app. You have also learned how to use the Google API client library and interact with YouTube. The Android Player API provides a lot of methods to control the playback of the videos and you can use them to come up with very creative apps. Refer to the complete reference guide to learn more about the API.

Ashraff Hathibelagal, 2014-12-22

Tuts+ is Hiring Android & Java Course Instructors


Are you an experienced Android developer looking for the next step in your career? Have you considered sharing your knowledge through teaching comprehensive online video courses?

We're growing our Code course topics here at Tuts+, expanding into teaching more mobile content areas, including Android. It's been a while since we've covered Java so we're hoping to work on some new Java content too! This is a unique opportunity to become part of the core Tuts+ instructor team.

What We're Looking For

  • You have extensive experience working with both Java and Android SDK and possibly other technologies.
  • You have some experience and are comfortable with mentoring, teaching or screencasting.
  • You love learning and are constantly expanding your knowledge of new technologies.

What Is a Course?

Tuts+ courses provide students with in-depth video training on a specific topic. Here’s how the process works:

  • Create 90+ minutes of video-based teaching.
  • Present with screencasts and slides.
  • Organize your course into chapters and bite-size lessons.
  • Teach skills comprehensively from start to finish.

Why Teach for Tuts+?

  • Work from home in your own time.
  • Give back to the community and share your skills teaching others.
  • Get paid a competitive per course rate of $3,500 USD.
  • We'll provide you with curriculum and screencasting setup support.

If you've created online video before we'd love to hear about it, but it's not essential. We'll get you set up with equipment, and we have a team of people ready to help you get started.

Interested? Get in Touch

  1. Prepare a short application video following the instructions below.
  2. Complete our application form answering a few questions about why you’d be suitable as a Tuts+ instructor, and include an accessible link to your application video in the Screencast Video section.

Special Instructions

In three minutes or less, give us a whirlwind tour of an interesting Android library.

We’re not concerned with audio quality at this point, or which library you choose. We’d just like to see how you teach.

Applications close on 14 January 2015, but the sooner you can get your application in, the better! We'll be reaching out to promising candidates as quickly as possible. 

Joel Bankhead, 2014-12-23


iOS 8: What's New in SpriteKit, Part 2


This tutorial gives an overview of the new features of the SpriteKit framework that were introduced in iOS 8. The new features are designed to make it easier to support advanced game effects and include support for custom OpenGL ES fragment shaders, lighting, shadows, advanced new physics effects and animations, and integration with SceneKit. In this tutorial, you'll learn how to implement these new features.

Series Format

This series is split up into two tutorials and covers the most important new features of the SpriteKit framework. In the first part, we take a look at shaders, lighting, and shadows. In the second part, I'll talk about physics and SceneKit integration.

While each part of this series stands on its own, I recommend following along step-by-step to properly understand the new features of the SpriteKit framework. After reading both parts, you'll be able to create both simple and more advanced games using the new features of the SpriteKit framework.

Download the Xcode project we created in the previous article from GitHub if you'd like to follow along.

1. Physics

In iOS 8, SpriteKit introduced new physics features, such as per-pixel physics, constraints, inverse kinematics, and physics fields.

Kinematics is the process of calculating the 3D position of the end of a linked structure, given the angles of all the joints. Inverse Kinematics (IK) does the opposite. Given the end point of the structure, what angles do the joints need to make in order to achieve that end point? The following image clarifies these concepts.

Inverse Kinematics

With SpriteKit, you use sprites to represent joints that use the parent-child relationship to create a joint hierarchy. For each relationship, you define the inverse kinematics constraints on each joint and control the minimum and maximum rotation angle between them. Note that each joint rotates around its anchor point.
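For intuition, here is the standard closed-form solution for a two-link planar arm with link lengths $l_1$ and $l_2$ reaching a target point $(x, y)$. This is a textbook result shown purely for illustration; SpriteKit solves its IK constraints for you:

```latex
% Two-link planar IK: joint angles that place the end of the arm at (x, y)
\theta_2 = \pm \arccos\!\left( \frac{x^2 + y^2 - l_1^2 - l_2^2}{2\, l_1 l_2} \right)

\theta_1 = \operatorname{atan2}(y, x)
         - \operatorname{atan2}\!\left( l_2 \sin\theta_2,\; l_1 + l_2 \cos\theta_2 \right)
```

The $\pm$ reflects the "elbow up" and "elbow down" solutions. This ambiguity is exactly why IK systems let you constrain the rotation range of each joint, as we do later in this section.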

Step 1: Inverse Kinematics (IK)

Open the PhysicsSceneEditor and add the croquette-o.png sprite to the yellow rectangle. Select the sprite and change Name in the SKNode Inspector to Root. Set the Physics Definition Body Type to None.

Add a second sprite, wood.png, and change its Name to FirstNode. Change the Parent field to Root. Move FirstNode by placing it to the right of Root and resize it to create a rectangle as shown below. Set the Physics Definition Body Type to None.

The result should look similar to the following scene.

Note that, when you select a sprite node, a white circle appears at its center. That circle represents the sprite node's anchor point around which every rotation is performed.

Anchor point

Step 2: Add More Sprites

Follow the previous steps and add two more sprites.

First Sprite

  • Add another croquette-o.png sprite.
  • Change its Name to SecondNode.
  • Change its Parent to FirstNode.
  • Position it on the right of FirstNode.
  • Change its Physics Definition Body Type to None.

Second Sprite

  • Add another wood.png sprite.
  • Change its Name field to ThirdNode.
  • Change its Parent to SecondNode.
  • Position it on the right of SecondNode.
  • Resize it to create a rectangle.
  • Change its Physics Definition Body Type to None.

The result should look similar to the following scene.

Resulting image

Step 3: Edit and Simulate

To test the joint connections, interactions, and constraints, you don't need to build and run your project. Xcode provides two modes, edit and simulate.

The simulate mode provides a real-time testing environment, while the edit mode is used to create and edit your scene. So far, we've been working in the edit mode. Note that any changes you make in simulate mode are not saved.

At the bottom of the scene editor, you can see which mode you are currently working in. If the bottom bar of the scene editor is white, then you're in edit mode. A blue background indicates you're in simulate mode. Click the label in the bottom bar to switch between the two modes.

Simulate mode
Edit mode

Change the mode to simulate and select the FirstNode, SecondNode, and ThirdNode sprites. You can select multiple sprites by holding Command.

Next, Shift-Control-click and drag the sprites around in the scene. The sprite nodes animate and rotate. However, the rotation is odd and needs to be corrected.

Step 4: IK Constraints

Switch back to edit mode and add a few constraints to the sprite nodes. Select each sprite node and change its properties as follows.

Select the Root and SecondNode sprite nodes and set the IK Constraints Max Angle to 0. Select the FirstNode and ThirdNode sprite nodes, and set Anchor Point X to 0 and IK Constraints Max Angle to 90.

By modifying these properties, the position and size of the sprite nodes will change. After adding the constraints, manually adjust their size and position, and switch to simulate mode to test the new constraints we added.

The below screenshot illustrates the correct constraints configuration.

Constraints in action

Step 5: Magnetic Field Node

Magnetic fields are also new in SpriteKit. Let's see how this works by adding a magnetic field to the physics scene. Open PhysicsScene.m and add an instance variable named magneticFieldNode of type SKFieldNode.

@implementation PhysicsScene {
    SKFieldNode *magneticFieldNode;
}

In the didMoveToView: method, we first configure the scene by creating a SKPhysicsBody instance for the scene and adding a gravitational force. This means that any nodes in the scene will be pulled downwards.

SKPhysicsBody *physicsBody = [SKPhysicsBody bodyWithEdgeLoopFromRect:self.frame];
[self.physicsWorld setGravity:CGVectorMake(0, -9)];
[self setPhysicsBody:physicsBody];

To configure the magneticFieldNode object, you need to configure its physics body as well as its position and strength. Note that each SKFieldNode has its own properties. The following code snippet shows how to configure the magnetic field node. We add the new node as a child node to the scene.

magneticFieldNode = [SKFieldNode magneticField];
[magneticFieldNode setPhysicsBody:[SKPhysicsBody bodyWithCircleOfRadius:80]];
[magneticFieldNode setPosition:CGPointMake(100, 100)];
[magneticFieldNode setStrength:3];
[self addChild:magneticFieldNode];

Step 6: Interactions

To see the magnetic field in action, we need to add a few nodes with which the magnetic field node can interact. In the following code snippet, we create three hundred sprites. Note that each sprite node has its own physics body and we set its affectedByGravity property to YES.

for (int i = 0; i < 300; i++) {
    SKSpriteNode *node4 = [SKSpriteNode spriteNodeWithTexture:[SKTexture textureWithImageNamed:@"wood.png"] size:CGSizeMake(25, 25)];
    [node4 setPhysicsBody:[SKPhysicsBody bodyWithRectangleOfSize:CGSizeMake(25, 25)]];
    [node4 setPosition:CGPointMake(arc4random()%640, arc4random()%950)];
    [node4.physicsBody setDynamic:YES];
    [node4.physicsBody setAffectedByGravity:YES];
    [node4.physicsBody setAllowsRotation:YES];
    [node4.physicsBody setMass:0.9];
    [self addChild:node4];
}

The completed didMoveToView: method should look as follows:

-(void)didMoveToView:(SKView *)view {
    SKPhysicsBody *physicsBody = [SKPhysicsBody bodyWithEdgeLoopFromRect:self.frame];
    [self.physicsWorld setGravity:CGVectorMake(0, -9)];
    [self setPhysicsBody:physicsBody];
    magneticFieldNode = [SKFieldNode magneticField];
    [magneticFieldNode setPhysicsBody:[SKPhysicsBody bodyWithCircleOfRadius:80]];
    [magneticFieldNode setPosition:CGPointMake(100, 100)];
    [magneticFieldNode setStrength:3];
    [self addChild:magneticFieldNode];
    for (int i = 0; i < 300; i++) {
        SKSpriteNode *node4 = [SKSpriteNode spriteNodeWithTexture:[SKTexture textureWithImageNamed:@"wood.png"] size:CGSizeMake(25, 25)];
        [node4 setPhysicsBody:[SKPhysicsBody bodyWithRectangleOfSize:CGSizeMake(25, 25)]];
        [node4 setPosition:CGPointMake(arc4random()%640, arc4random()%950)];
        [node4.physicsBody setDynamic:YES];
        [node4.physicsBody setAffectedByGravity:YES];
        [node4.physicsBody setAllowsRotation:YES];
        [node4.physicsBody setMass:0.9];
        [self addChild:node4];
    }
}

Before we build and run the application, we override the touchesMoved:withEvent: method so that you can move the magnetic field node by dragging your finger across the screen.

-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches) {
        [magneticFieldNode setPosition:[touch locationInNode:self]];
    }
}

Build and run the application to test the magnetic field node's effect on the scene. For additional information about simulating physics using the SpriteKit framework, I recommend reading Apple's documentation about this topic.

2. SceneKit Integration

SceneKit is a high-level Objective-C framework for building applications and games that use 3D graphics. It supports the import, manipulation, and rendering of 3D assets. The rendering algorithm only requires the description of your scene's contents, animations, and actions you want it to perform.

Through this integration, you are now able to render 3D content in a SpriteKit scene. SceneKit has a tree structure and can be used in two ways:

  • standalone SceneKit environment
  • integrated into SpriteKit

SceneKit has a tree hierarchy composition. In a standalone SceneKit environment, the base class for the tree structure is an SCNNode instance as shown in the diagram below. An SCNNode object by itself has no visible content when the scene containing it is rendered. It simply defines a position in space that represents the position, rotation, and scale of a node relative to its parent node.

SceneKit hierarchy

When you integrate SceneKit into a SpriteKit-based app, you need to define a SK3DNode object as the root object for your scene. This means that the core SceneKit hierarchy changes to the following:

SceneKit and SpriteKit hierarchy

Note that not every child node in the above diagram is required. You only define and configure the nodes that fit your needs. For instance, you can add an SCNLight node to illuminate the scene even if you don't include an SCNCamera node in the scene.

Step 1: Live Preview of Models

SpriteKit and SceneKit support a number of file formats for importing models. You can preview these models in real time in Xcode. Inside the Textures folder in your project (Resources > Textures), there's a file named ship.dae. When you select this file, you're presented with a new user interface as shown below.

Xcode live preview

On the left of the editor, you can see two groups:

  • Entities: This group contains information about the predefined animations, camera position, lights, and materials defined by the model file. The file we've opened only contains information about the model's geometry and its material.
  • Scene graph: This group contains information about the original object mesh. In this case, the object was created as a whole and you only see a single mesh.

Step 2: Importing an External Model

To use SceneKit in combination with SpriteKit, you need to import the SceneKit framework's umbrella header. Open SceneKitScene.m and import it as shown below.

#import <SceneKit/SceneKit.h>

We're going to use the model stored in ship.dae as the 3D scene. Inside the didMoveToView: method, create the SCNScene object that loads a scene from that file.

SCNScene *shipScene = [SCNScene sceneNamed:@"ship.dae"];

Remember the tree hierarchy I mentioned earlier? To add the shipScene object to the SKScene object, two steps are required:

  • create a SK3DNode object
  • define a SceneKit scene to render

In this case, the scene to render is the shipScene. Note that you also define the node's position and its size.

SK3DNode *sk3DNodeFist = [[SK3DNode alloc] initWithViewportSize:CGSizeMake(300, 300)];
[sk3DNodeFist setPosition:CGPointMake(200,300)];
[sk3DNodeFist setScnScene:shipScene];

Finally, add the SK3DNode object as a child node to the SKScene object.

[self addChild:sk3DNodeFist];

To make the final result a bit more appealing, set the scene's background color to green as shown below.

[self setBackgroundColor:[SKColor greenColor]];

This is what the complete didMoveToView: method should look like. Build and run the application to see the result.

-(void)didMoveToView:(SKView *)view {
    [self setBackgroundColor:[SKColor greenColor]];
    SCNScene *shipScene = [SCNScene sceneNamed:@"ship.dae"];
    SK3DNode *sk3DNodeFist = [[SK3DNode alloc] initWithViewportSize:CGSizeMake(300, 300)];
    [sk3DNodeFist setPosition:CGPointMake(200,300)];
    [sk3DNodeFist setScnScene:shipScene];
    [self addChild:sk3DNodeFist];
}

Step 3: Creating a Custom Scene

Let's create a more complex scene that contains several SCNNode objects. For this second scene, we need to create another SK3DNode object.

SK3DNode *sk3DNode = [[SK3DNode alloc] initWithViewportSize:CGSizeMake(400, 400)];  
[sk3DNode setPosition:CGPointMake(150,200)]; 

Next, we create the SCNScene object, the one that will contain the scene child nodes.

SCNScene *sceneObject = [SCNScene scene];

This sceneObject will have three nodes:

  • Camera: This node defines the point of view from which the scene is rendered.
  • Light: This node enables you to see the different material properties of the 3D object. You normally define the light's type and color.
  • 3D object: This is the object you import or define in code. Out of the box, SceneKit lets you define several parametric 3D objects, such as a torus, box, pyramid, sphere, cylinder, cone, tube, capsule, floor, 3D text, or a custom shape.

For each individual node, you always perform three actions. Let's take the camera node as an example.

  1. Create an SCNCamera object and define its properties.
  2. Create an SCNNode to which the SCNCamera will be assigned.
  3. Add the SCNNode as a child node to the SCNScene object.

Let's now create the three nodes I mentioned earlier. This is what we need to implement to create the camera node.

SCNCamera *camera = [SCNCamera camera];
SCNNode *cameraNode = [SCNNode node];
[cameraNode setCamera:camera];
[cameraNode setPosition:SCNVector3Make(0, 0, 40)];
[sceneObject.rootNode addChildNode:cameraNode];

By default, both the camera and the contents of the 3D scene are located at the origin (0, 0, 0). Using the position property, you can move the camera along the x, y, and z axes.
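For example, you could reposition the camera to change the view of the scene. The values below are hypothetical adjustments, not part of the tutorial's scene.

// Pull the camera farther back along the z axis for a wider view.
[cameraNode setPosition:SCNVector3Make(0, 0, 80)];

// Offsetting along x or y shifts the scene in the viewport, since the
// camera still faces down the negative z axis unless you also rotate it.
[cameraNode setPosition:SCNVector3Make(20, 10, 40)];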

The light node requires a little bit more work, but the following code snippet should be easy to understand.

SCNLight *spotLight = [SCNLight light];
[spotLight setType:SCNLightTypeDirectional];
[spotLight setColor:[SKColor redColor]];

SCNNode *spotLightNode = [SCNNode node];
[spotLightNode setLight:spotLight];
[spotLightNode setPosition:SCNVector3Make(0, 0, 5)];
[sceneObject.rootNode addChildNode:spotLightNode];

Note that a node can only have one parent, so we add the light node directly to the scene's root node.

We'll also create a torus object as shown in the following code snippet.

SCNTorus *torus = [SCNTorus torusWithRingRadius:13 pipeRadius:1.5];
SCNNode *torusNode = [SCNNode nodeWithGeometry:torus];
[torusNode setTransform:SCNMatrix4MakeRotation(M_PI / 3, 0, 1, 0)];
[sceneObject.rootNode addChildNode:torusNode];
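Every other parametric geometry mentioned earlier follows the same pattern. As an illustration only, and not part of the final scene, a box with slightly rounded edges could be created like this.

// A 10x10x10 box with a small chamfer, wrapped in a node.
SCNBox *box = [SCNBox boxWithWidth:10 height:10 length:10 chamferRadius:1];
SCNNode *boxNode = [SCNNode nodeWithGeometry:box];
[sceneObject.rootNode addChildNode:boxNode];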

Finally, we set the scene that we want to render and add the sk3DNode as a child node of the SKScene instance.

[sk3DNode setScnScene:sceneObject];
[self addChild:sk3DNode];

This is what the final didMoveToView: method should look like.

-(void)didMoveToView:(SKView *)view {
    [self setBackgroundColor:[SKColor greenColor]];

    SCNScene *shipScene = [SCNScene sceneNamed:@"ship.dae"];
    
    SK3DNode *sk3DNodeFist = [[SK3DNode alloc] initWithViewportSize:CGSizeMake(300, 300)];
    [sk3DNodeFist setPosition:CGPointMake(200,300)];
    [sk3DNodeFist setScnScene:shipScene];
    [self addChild:sk3DNodeFist];
    
    SK3DNode *sk3DNode = [[SK3DNode alloc] initWithViewportSize:CGSizeMake(400, 400)];
    [sk3DNode setPosition:CGPointMake(150,200)];
    
    SCNScene *sceneObject = [SCNScene scene];
    
    SCNCamera *camera = [SCNCamera camera];
    SCNNode *cameraNode = [SCNNode node];
    [cameraNode setCamera:camera];
    [cameraNode setPosition:SCNVector3Make(0, 0, 40)];
    [sceneObject.rootNode addChildNode:cameraNode];
    
    SCNLight *spotLight = [SCNLight light];
    [spotLight setType:SCNLightTypeDirectional];
    [spotLight setColor:[SKColor redColor]];
    
    SCNNode *spotLightNode = [SCNNode node];
    [spotLightNode setLight:spotLight];
    [spotLightNode setPosition:SCNVector3Make(0, 0, 5)];
    [sceneObject.rootNode addChildNode:spotLightNode];
    
    SCNTorus *torus = [SCNTorus torusWithRingRadius:13 pipeRadius:1.5];
    SCNNode *torusNode = [SCNNode nodeWithGeometry:torus];
    [torusNode setTransform:SCNMatrix4MakeRotation(M_PI / 3, 0, 1, 0)];
    [sceneObject.rootNode addChildNode:torusNode];
    
    [sk3DNode setScnScene:sceneObject];
    [self addChild:sk3DNode];
}

Build and run the application. You should see something similar to the following screenshot.

Final result

Step 4: Animating the Scene

You can animate the scene using the CABasicAnimation class from Core Animation. You create an instance by invoking animationWithKeyPath: with the property you want to animate, in this case the node's rotation. The animation in the following code snippet rotates the torus by 4π, that is, two full revolutions, every five seconds and loops indefinitely. Add the following code snippet to the didMoveToView: method.

CABasicAnimation *torusRotation = [CABasicAnimation animationWithKeyPath:@"rotation"];
torusRotation.byValue = [NSValue valueWithSCNVector4:SCNVector4Make(1, 1, 0, 4.0*M_PI)];
[torusRotation setTimingFunction:[CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionLinear]];
[torusRotation setRepeatCount:INFINITY];
[torusRotation setDuration:5.0];

[torusNode addAnimation:torusRotation forKey:nil];

Build and run the application to test the animation.
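If you later need to stop the rotation, SCNNode adopts the SCNAnimatable protocol, so the animation can be removed again. A minimal sketch, assuming you register the animation under a key of your own choosing instead of passing nil:

// Register the animation under a key...
[torusNode addAnimation:torusRotation forKey:@"torusRotation"];

// ...so it can be removed later, for example from a touch handler.
[torusNode removeAnimationForKey:@"torusRotation"];

// Or remove every animation attached to the node.
[torusNode removeAllAnimations];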

3. More SpriteKit

If you want to learn more about SpriteKit, then I recommend reading Apple's SpriteKit Programming Guide or browsing the framework reference.

Conclusion

This concludes the second tutorial of this two-part series on the new features of the SpriteKit framework introduced in iOS 8. In this part, you learned how to use physics simulation and integrate SceneKit. If you have any questions or comments, feel free to drop a line in the comments.

2014-12-24T15:10:38Z · Orlando Pereira


iOS 2014: A Year in Review


This time last year, I wrote that 2013 was the most significant year since the introduction of the iPhone. Looking back at 2014, it's clear that I'm going to need to reiterate those words. Apple blew us away with announcements, new technologies, and promises. Let's take a few minutes to look back at 2014.

iOS 8

Many of us thought the introduction and release of iOS 8 was going to be the highlight of 2014 for iOS developers, but that wasn't entirely true as we'll see in a few moments.

While iOS 7 was predominantly focused on the user interface and user experience of the operating system, with iOS 8 Apple shifted focus to the inner workings of the operating system by adding extensions, introducing CloudKit and HealthKit, integrating TestFlight, etc. Amidst the flood of announcements and new information, two patterns stood out.

First, iOS 8 continues where iOS 7 left off. Apple continues to improve and polish the operating system, aiming to provide a more consistent and reliable user experience. With the introduction of extensions, iOS 8 opens up a wide range of opportunities and possibilities for developers, that is, if Apple allows them to.

Second, with the release of OS X Yosemite, the integration between iOS and OS X has become tighter, unlocking a new category of possibilities to innovate and improve the user experience. Apple has named this tight integration continuity and has shown us what is possible by adopting this new technology in some of its own applications.

Yosemite

The announcement of OS X Yosemite at WWDC 2014 didn't come as a surprise. Last year, Apple committed to an annual release schedule for OS X, starting with OS X Mavericks, and every developer at WWDC 2014 was expecting the next iteration of the operating system.

One of the key features of Yosemite is its redesigned user interface. My favorite feature, however, is continuity, making it possible for your Mac and iOS devices to do some really cool things.

If you have a Mac running Yosemite and an iOS device running iOS 8, then you can send and receive text messages on your Mac, start an email on your iPhone and finish it on your Mac, use your iPhone as a hotspot with a single click, etc. Continuity works pretty well if you ask me.

It's now also possible to use AirDrop to send files from your Mac to an iOS device. Family Sharing received a significant update and Apple also introduced iCloud Drive, a direct competitor to Dropbox.

Messages also received an update. It's now possible to send short audio messages, leave conversations you're no longer interested in, and make calls through your iPhone while on your Mac. It's amazing to see how Apple continues to improve an already fantastic operating system.

Swift

From a developer's perspective, one of the most important announcements and biggest surprises of 2014 was the introduction and release of Swift, a brand new programming language that will power the next generation of iOS and OS X applications.

We all know that Apple is great at keeping secrets, but the company did a fantastic job keeping Swift a secret. The language was introduced at WWDC 2014 and made every developer feel like Christmas had come early this year.

Swift is a modern programming language with an easy to understand syntax that is incredibly expressive. It takes the best parts of Objective-C, including its runtime, and combines this with modern technologies. Even though Swift can be combined with Objective-C, Swift is not tied to C like Objective-C is.

If you'd like to learn more about Swift, then I encourage you to read the series on Swift at Tuts+. It's still early days for Swift, but there's really no reason to not get your feet wet with this new language.

WatchKit

As if Swift, iOS 8, and OS X Yosemite weren't enough already, Apple announced Apple Watch in September. Apple showed us what the company's first generation of wearables will look like and it also provided developers with WatchKit, a framework for creating applications for Apple Watch.

The first generation of Apple Watch applications will be extensions of existing iOS applications running on your iPhone, but Apple also announced it will open Apple Watch up to native third party applications in 2015. If you're an iOS developer, then you don't want to miss out on this new breed of applications.

Xcode 6

With the announcements of iOS 8, OS X Yosemite, and Swift, Xcode received little attention at this year's WWDC. But it's important to remember that Xcode is the tool most iOS and OS X developers use day in, day out to create the applications you and I use every day.

Xcode continues to play a key role in every developer's workflow and Apple's IDE is more powerful than ever with support for Swift, playgrounds, adaptive layouts, view debugging, and an improved testing architecture.

Interface Builder, in particular, received a major update with the ability to debug views, preview user interfaces, and support for adaptive layouts.

Product Line

iPhone 6 & 6 Plus

After years of rumors, Apple finally revealed a larger iPhone, two of them actually. The company released the iPhone 6 with a 4.7" display and the iPhone 6 Plus with a 5.5" display. The iPhone's looks also changed dramatically with a thinner design, a higher resolution display, and an edge-to-edge glass front. They truly look stunning.

Both models are powered by the new A8 chipset and battery life has improved slightly for the larger model, the iPhone 6 Plus. As with every major release, the camera received a significant update, delivering even better pictures, along with better software and new APIs to take advantage of the camera's new capabilities.

The new models also incorporate NFC (Near Field Communication) on which Apple Pay is built. I haven't had the chance to try out Apple Pay, but it seems to be a pretty solid technology that works both offline and online.

iPad Air & iPad Mini

The iPad Mini and iPad Air received updates too, but the changes weren't groundbreaking. The iPad Air is thinner, includes Touch ID, and ships with the brand new A8X chipset.

The only noteworthy upgrade the iPad Mini received was the addition of Touch ID. If you own an iPad Mini 2, then there's no need to upgrade your device unless Touch ID is a must-have feature for you.

iMac 5K

While we're still waiting for Apple to build a retina Thunderbolt display, in the meantime the company released the iMac 5K, an iMac with a retina display. I haven't had a chance to see an iMac 5K in real life, but it's supposed to be amazing. How can 14.7 million pixels not be amazing?

Apple Watch

Apple revealed Apple Watch in September and the general consensus has been positive to very positive. The watch looks nice and seems to be a more than viable player in the growing market of wearables.

Apple's first wearable is supposed to go on sale in the first half of 2015, but it's still unclear how much an Apple Watch will cost. There are three collections, Apple Watch, Apple Watch Sport, and Apple Watch Edition. The straps are interchangeable and the choice of materials is up to the user. This makes it difficult to put a price tag on the watch you'd like to buy.

Apple made it clear that the watch will support third party applications. The first generation of applications, however, will be extensions of existing iOS applications, running on a paired iPhone. The second generation will be more powerful and run natively on Apple Watch. We'll have to wait for 2015 to better understand how this will work and pan out.

What About Android?

Anyone saying Android is inferior to iOS should take a second look at what Google accomplished in the mobile space this year. Android Lollipop is another milestone for Android and arguably the most important release to date. It is packed with features developers can take advantage of, but that is only part of the story.

Google also introduced Material Design and Polymer at this year's Google I/O. Material Design is a visual or design language that builds upon Google's experiments Google Now. Android Lollipop leverages Material Design, but it is clear that Google aims to use Material Design in its other products as well.

Polymer was also revealed to the public during this year's Google I/O. To quote the Polymer website it's a "pioneering library that makes it faster than ever before to build beautiful applications on the web." Polymer clearly illustrates that Google has a wider vision and sees the web as a first class citizen of the mobile space—unlike Apple.

Conclusion

I'm sure you agree that 2014 was a busy year for everyone involved in technology or mobile development. No matter what platform you use or develop for, the future of the mobile space looks bright. What was the most important announcement for you in 2014? Let me know in the comments below.

2014-12-26T19:15:13.000Z2014-12-26T19:15:13.000ZBart Jacobs

iOS 2014: A Year in Review


This time last year, I wrote that 2013 was the most significant year since the introduction of the iPhone. Looking back at 2014, it's clear that I'm going to need to reiterate those words. Apple blew us away with announcements, new technologies, and promises. Let's take a few minutes to look back at 2014.

iOS 8

Many of us thought the introduction and release of iOS 8 was going to be the highlight of 2014 for iOS developers, but that wasn't entirely true as we'll see in a few moments.

While iOS 7 was predominantly focused on the user interface and user experience of the operating system, with iOS 8 Apple shifted focus to the inner workings of the operating system, adding extensions, introducing CloudKit and HealthKit, integrating TestFlight, and more. Amidst the flood of announcements and new information, two patterns stood out.

First, iOS 8 continues where iOS 7 left off. Apple continues to improve and polish the operating system, aiming to provide a more consistent and reliable user experience. With the introduction of extensions, iOS 8 opens up a wide range of opportunities and possibilities for developers, that is, if Apple allows them to.

Second, with the release of OS X Yosemite, the integration between iOS and OS X has become tighter, unlocking a new category of possibilities to innovate and improve the user experience. Apple has named this tight integration Continuity and has shown us what is possible by adopting this new technology in some of its own applications.

Yosemite

The announcement of OS X Yosemite at WWDC 2014 didn't come as a surprise. Last year, Apple committed to an annual release schedule for OS X, starting with OS X Mavericks, and every developer at WWDC 2014 was expecting the next iteration of the operating system.

One of the key features of Yosemite is its redesigned user interface. My favorite feature, however, is Continuity, which makes it possible for your Mac and iOS devices to do some really cool things together.

If you have a Mac running Yosemite and an iOS device running iOS 8, then you can send and receive text messages on your Mac, start an email on your iPhone and finish it on your Mac, use your iPhone as a hotspot with a single click, and more. Continuity works pretty well if you ask me.

It's now also possible to use AirDrop to send files from your Mac to an iOS device. Family sharing has received a significant update and Apple also introduced iCloud Drive, a direct competitor to Dropbox.

Messages also received an update. It's now possible to send short audio messages, leave conversations you're no longer interested in, and make calls through your iPhone while on your Mac. It's amazing to see how Apple continues to improve an already fantastic operating system.

Swift

From a developer's perspective, one of the most important announcements and biggest surprises of 2014 was the introduction and release of Swift, a brand new programming language that will power the next generation of iOS and OS X applications.

We all know that Apple is great at keeping secrets, but the company did a fantastic job keeping Swift a secret. The language was introduced at WWDC 2014 and made every developer feel Christmas had come early this year.

Swift is a modern programming language with an easy to understand syntax that is incredibly expressive. It takes the best parts of Objective-C, including its runtime, and combines this with modern technologies. Even though Swift can be combined with Objective-C, Swift is not tied to C like Objective-C is.

If you'd like to learn more about Swift, then I encourage you to read the series on Swift at Tuts+. It's still early days for Swift, but there's really no reason to not get your feet wet with this new language.

WatchKit

As if Swift, iOS 8, and OS X Yosemite weren't enough already, Apple announced Apple Watch in September. Apple showed us what the company's first generation of wearables will look like and it also provided developers with WatchKit, a framework for creating applications for Apple Watch.

The first generation of Apple Watch applications will be extensions of existing iOS applications running on your iPhone, but Apple also announced it will open Apple Watch up to native third party applications in 2015. If you're an iOS developer, then you don't want to miss out on this new breed of applications.

Xcode 6

With the announcements of iOS 8, OS X Yosemite, and Swift, Xcode received little attention at this year's WWDC. But it's important to remember that Xcode is the tool most iOS and OS X developers use day in day out to create the applications you and I use every day.

Xcode continues to play a key role in every developer's workflow and Apple's IDE is more powerful than ever with support for Swift, playgrounds, adaptive layouts, view debugging, and an improved testing architecture.

Interface Builder, in particular, received a major update with the ability to debug views, preview user interfaces, and support for adaptive layouts.

Product Line

iPhone 6 & 6 Plus

After years of rumors, Apple finally revealed a larger iPhone—two, actually. The company released iPhone 6 with a 4.7" display and iPhone 6 Plus with a 5.5" display. The iPhone's looks also changed dramatically with a thinner design, a higher-resolution display, and an edge-to-edge glass front. They truly look stunning.

Both models are powered by the new A8 chipset and battery life has improved slightly for the largest model, iPhone 6 Plus. As with every major release, the camera received a significant update, delivering even better pictures as well as better software and new APIs to take advantage of the camera's new capabilities.

The new models also incorporate NFC (Near Field Communication) on which Apple Pay is built. I haven't had the chance to try out Apple Pay, but it seems to be a pretty solid technology that works both offline and online.

iPad Air & iPad Mini

The iPad Mini and iPad Air received updates too, but the changes weren't groundbreaking. The iPad Air is thinner, includes Touch ID, and ships with the brand new A8X chipset.

The only noteworthy upgrade the iPad Mini received was the addition of Touch ID. If you own an iPad Mini 2, then there's no need to upgrade your device unless Touch ID is a must-have feature for you.

iMac 5K

While we're still waiting for Apple to build a retina Thunderbolt display, in the meantime the company released the iMac 5K, an iMac with a retina display. I haven't had a chance to see an iMac 5K in real life, but it's supposed to be amazing. How can 14.7 million pixels not be amazing?

Apple Watch

Apple revealed Apple Watch in September and the general consensus has been positive to very positive. The watch looks nice and seems to be a more than viable player in the growing market of wearables.

Apple's first wearable is supposed to be available for sale in the first half of 2015, but it's still unclear how much your Apple Watch will cost. There are three collections: Apple Watch, Apple Watch Sport, and Apple Watch Edition. The straps are interchangeable and the user chooses the materials from which Apple Watch is made. This makes it difficult to put a price tag on the watch that you'd like to buy.

Apple made it clear that the watch will support third party applications. The first generation of applications, however, will be extensions of existing iOS applications, running on a paired iPhone. The second generation will be more powerful and run natively on Apple Watch. We'll have to wait for 2015 to better understand how this will work and pan out.

What About Android?

Anyone saying Android is inferior to iOS should take a second look at what Google accomplished in the mobile space this year. Android Lollipop is another milestone for Android and arguably the most important release to date. It is packed with features developers can take advantage of, but that is only part of the story.

Google also introduced Material Design and Polymer at this year's Google I/O. Material Design is a visual or design language that builds upon Google's experiments with Google Now. Android Lollipop leverages Material Design, but it is clear that Google aims to use Material Design in its other products as well.

Polymer was also revealed to the public during this year's Google I/O. To quote the Polymer website it's a "pioneering library that makes it faster than ever before to build beautiful applications on the web." Polymer clearly illustrates that Google has a wider vision and sees the web as a first class citizen of the mobile space—unlike Apple.

Conclusion

I'm sure you agree that 2014 was a busy year for everyone involved in technology or mobile development. No matter what platform you use or develop for, the future of the mobile space looks bright. What was the most important announcement for you in 2014? Let me know in the comments below.

2014-12-26T19:15:13.000Z | Bart Jacobs

What's New in Android Lollipop


After months of speculation, hype and teasing, Google officially released Android 5.0 to the world on 12 November 2014. The SDK was made available on 3 November. We already knew a lot about the features, due to the L preview SDK that was released on 25 June 2014. What we didn't know was what the L would stand for.

Lollipop was the name given to the 5.0 update. Looking back, Lollipop was a clear favorite. But, if given enough time to speculate, the obvious once again holds the power to surprise.

Lollipop is a significant update for the Android platform. It's arguably the biggest release to date, and certainly the most ambitious.

1. Features

Now that the Android SDK is out, here's a roundup of some of the new features in Android 5.0.

Battery

Project Volta   

In recent years, Google has focused with every Android release on a particular development aspect of the operating system and how it can be improved. For 5.0, it was improving battery life.

The JobScheduler API allows you to run jobs asynchronously at a later time or under particular conditions. JobInfo objects can be used to define the conditions under which a scheduled job will run.

Thanks to these additions, developers now have a lot more control over when and how battery-draining tasks are performed. 
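As a rough sketch of how these pieces fit together (the job ID and MyJobService class are hypothetical; MyJobService would be a JobService subclass declared in the manifest), a job that only runs while charging on an unmetered network might be scheduled like this:

```java
// Describe the conditions under which the job may run.
ComponentName service = new ComponentName(context, MyJobService.class);
JobInfo job = new JobInfo.Builder(42, service)   // 42: arbitrary job ID
        .setRequiresCharging(true)
        .setRequiredNetworkType(JobInfo.NETWORK_TYPE_UNMETERED)
        .build();

// Hand the job off to the system; it runs when the conditions are met.
JobScheduler scheduler =
        (JobScheduler) context.getSystemService(Context.JOB_SCHEDULER_SERVICE);
scheduler.schedule(job);
```

Because the system batches such jobs, the radio and CPU can stay asleep longer, which is where the battery savings come from.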

Developer Tools

There is a new ADB command dumpsys batterystats that can be used to generate statistical data about battery usage on a device. Take a look at the following command to see how this works.

adb shell dumpsys batterystats --charged <package-name>

Notifications

In Lollipop, notifications can be displayed on the lock screen. Developers can specify the amount of information displayed within a notification via setVisibility, which accepts the following values:

  • VISIBILITY_PRIVATE: shows basic information, such as the notification's icon, but hides the notification's content
  • VISIBILITY_PUBLIC: shows the notification's content
  • VISIBILITY_SECRET: shows nothing, not even the notification's icon

Metadata can now be added to notifications to specify a category and priority, and to attach additional contacts.

Key notifications, such as incoming calls, will appear in a heads-up notification window, which will float at the top of the current app until the user acknowledges or dismisses the notification.
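Putting the visibility and metadata APIs together, a lock-screen-aware incoming-call notification might look like the following sketch (the icon resource and pending intent are hypothetical):

```java
// A high-priority call notification that hides its content on the lock screen.
Notification notification = new Notification.Builder(context)
        .setSmallIcon(R.drawable.ic_call)                // hypothetical icon
        .setContentTitle("Incoming call")
        .setContentText("John Appleseed")
        .setVisibility(Notification.VISIBILITY_PRIVATE)  // icon only on lock screen
        .setCategory(Notification.CATEGORY_CALL)         // metadata: category
        .setPriority(Notification.PRIORITY_HIGH)         // eligible for heads-up
        .setFullScreenIntent(pendingIntent, true)        // hypothetical PendingIntent
        .build();
```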

Multitasking

The recents screen has been renamed to overview. With the new name come new APIs that improve multitasking options on Android. You can now have your activities treated as tasks and be shown in their own window in the overview screen.

For example, a web browser app could be set so that each tab has its own window. In the previous recents screen, a single browser app would have been displayed.

If you have a website, you can add <meta name="theme-color" content="#3F51B5"> to your header section to have overview display the given color as the header for your website.

Runtime and ART

Previous versions of Android have all used Dalvik as the process virtual machine. Applications are commonly written in Java, which is then compiled to bytecode. This is then translated to Dalvik bytecode and stored in .dex and .odex files, for Dalvik to then process. 

This is a very basic explanation of what the runtime does, but hopefully it conveys its importance. Applications run on the process virtual machine, so its performance determines the overall performance of the app and can become a bottleneck.

Dalvik uses JIT (Just In Time) compilation, meaning that it compiles an application's bytecode only at the moment it is needed.

ART, on the other hand, uses AOT (Ahead Of Time) compilation to compile the bytecode. When an application is installed, it's compiled by ART's dex2oat utility, which creates ELF executables instead of .odex files. From then on, the application is executed from the already compiled ELF executable.

That's a lot of saved compiling at the expense of longer application install times and some extra disk space.

With the addition of improved garbage collection (GC), ART outperforms Dalvik in nearly every way, making for a sharper and more fluid Android experience.

Android TV

To help bring your app to large screen displays, Lollipop introduces the Leanback UI and the Android TV Input Framework (TIF). The Leanback library provides user interface widgets for TV apps. TIF is designed to allow TV apps to handle video streams from sources such as HDMI inputs, TV tuners, and IPTV receivers.

Graphics

Khronos OpenGL ES 3.1 has been added. Key features include:

  • compute shaders
  • separate shader objects
  • shading language improvements
  • extensions for advanced blend modes and debugging
  • indirect draw commands
  • multisample and stencil textures

Android 5.0 remains backwards compatible with OpenGL ES 2.0 and 3.0.

Android Extension Pack (AEP)

To supplement OpenGL ES 3.1, a set of OpenGL ES extensions have been added that allow for the following:

  • guaranteed fragment shader support for shader storage buffers, images, and atomics (fragment shader support is optional in OpenGL ES 3.1)
  • different blend modes for each color attachment in a frame buffer
  • tessellation and geometry shaders
  • ASTC (LDR) texture compression format
  • per-sample interpolation and shading

WebView

Android Lollipop includes a new version of Chromium for WebView, based on the Chromium M37 release, which adds support for WebAudio, WebRTC, and WebGL.

Native support for Web Components is also included in the update and will allow for the use of Polymer and its Material Design elements without requiring polyfills.

As of Android 5.0, Chromium is now updatable from the Play Store so new APIs and bug fixes will be available immediately and will no longer require an update of the Android operating system.

Media Browsing    

The new android.media.browse API allows apps to browse the media content library of other apps. The MediaBrowserService class is used to expose media in an application, while the MediaBrowser class is used to interact with a media browser service.

Media Playback Control

Two new classes have been introduced to make playback control simpler to manage across different UIs and services.

MediaSession replaces RemoteControlClient. It provides a set of callback methods for use in transport controls and media buttons. MediaController can be used to create a custom media controller app, which can then be used to send commands to a MediaSession.
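A minimal sketch of the new pattern (the player object here is a hypothetical playback wrapper) looks like this:

```java
// Create a session and respond to transport controls.
MediaSession session = new MediaSession(context, "MyAppSession");
session.setCallback(new MediaSession.Callback() {
    @Override
    public void onPlay() { player.start(); }   // hypothetical player

    @Override
    public void onPause() { player.pause(); }
});
session.setActive(true);

// Elsewhere, a controller can send commands to that session.
MediaController controller =
        new MediaController(context, session.getSessionToken());
controller.getTransportControls().play();
```

The session token is the handoff point: any UI that holds it, on any screen, can drive playback without knowing how the player is implemented.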

New Sensors

Two new sensors have been introduced:

  • Tilt Detector: improves activity recognition
  • Heart Rate Sensor: capable of reporting the heart rate of the user touching the device

Of course, both of these sensors require supported hardware.

Managed Provisioning

Device administrators can use a managed provisioning service to add apps to a separate managed profile. If there's an existing personal account on a device that has been provisioned, the managed profile apps will appear alongside the existing applications.

Device Owner

A device owner is a specialised type of device administrator that can create and remove secondary users and configure global settings, essentially giving Android a traditional administrator and user account system.

Screen Pinning

Screen pinning is a new feature that is comparable to kiosk mode on iOS. Screen pinning includes the following features:

  • The status bar is blank.
  • Other apps cannot launch new activities.
  • User notifications and status information are hidden.
  • The current app can create new activities as long as no new tasks are created.

Screen pinning can be activated manually via Settings > Security > Screen Pinning. It can also be activated programmatically. The startLockTask method can be called from your app to activate screen pinning. If the app is not from a device owner, a confirmation prompt will be shown. The setLockTaskPackages method can be called by a device owner app and will avoid the confirmation prompt.

To deactivate screen pinning, you need to call stopLockTask if it was initiated by a device owner app. If it was activated by a non-device owner, the user can exit screen pinning mode by holding both the back and recents buttons.
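In an activity, the round trip is just two calls, sketched below with a version guard since the APIs only exist from API level 21:

```java
// Pin the current task. Without a device owner, Android asks the
// user to confirm before pinning takes effect.
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
    startLockTask();
}

// Later: leave pinned mode programmatically (device owner apps);
// otherwise the user holds Back + Recents to exit.
stopLockTask();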

Screen Sharing

Screen capturing is now possible through the new android.media.projection APIs. The createVirtualDisplay method allows the calling app to capture the screen into a surface object, which can then be sent across the network. The API can only capture non-secure content and does not include audio.

Camera

RAW image capturing has finally arrived on Android, thanks to the new android.hardware.camera2 API.

Bluetooth Low Energy

Android devices can now act as Bluetooth LE peripherals. Apps can make use of this to make their presence known to nearby devices. With the new android.bluetooth.le APIs, you can enable your apps to connect to nearby Bluetooth devices, broadcast advertisements, and scan for responses. These new features also come with a new manifest permission, BLUETOOTH_ADMIN.

These APIs will be extremely useful when working with wearable devices, health and fitness apps, and monitoring apps. All of these are predicted growth areas for Android in the near future.
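A minimal advertising sketch, assuming Bluetooth is enabled and the BLUETOOTH and BLUETOOTH_ADMIN permissions are declared, might look like this:

```java
// Advertise this device as a connectable BLE peripheral.
BluetoothManager manager =
        (BluetoothManager) context.getSystemService(Context.BLUETOOTH_SERVICE);
BluetoothLeAdvertiser advertiser =
        manager.getAdapter().getBluetoothLeAdvertiser();

AdvertiseSettings settings = new AdvertiseSettings.Builder()
        .setAdvertiseMode(AdvertiseSettings.ADVERTISE_MODE_LOW_POWER)
        .setConnectable(true)
        .build();
AdvertiseData data = new AdvertiseData.Builder()
        .setIncludeDeviceName(true)
        .build();

advertiser.startAdvertising(settings, data, new AdvertiseCallback() {
    @Override
    public void onStartSuccess(AdvertiseSettings settingsInEffect) {
        // Nearby devices can now discover us.
    }
});
```

Note that getBluetoothLeAdvertiser can return null on hardware that doesn't support peripheral mode, so a production app should check for that.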

NFC

NFC has been improved in several ways:

  • Android Beam is now an option in the share menu.
  • invokeBeam can be used to initiate the sharing of data. You no longer have to physically bump devices.
  • registerAidsForService and setPreferredService have been added to aid the development of payment apps.

Multiple Network Connections

New APIs allow for apps to query networks for available features, such as whether the network is cellular, metered or Wi-Fi.
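For example, an app could ask specifically for unmetered Wi-Fi before starting a large download; the sketch below uses the new ConnectivityManager callbacks:

```java
// Request an unmetered Wi-Fi network and react when one appears.
ConnectivityManager cm =
        (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);

NetworkRequest request = new NetworkRequest.Builder()
        .addTransportType(NetworkCapabilities.TRANSPORT_WIFI)
        .addCapability(NetworkCapabilities.NET_CAPABILITY_NOT_METERED)
        .build();

cm.requestNetwork(request, new ConnectivityManager.NetworkCallback() {
    @Override
    public void onAvailable(Network network) {
        // A matching network is now available; start the transfer here.
    }
});
```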

Printing Framework

Bitmap images can now be rendered from PDF document pages, using the new PdfRenderer class.
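Rendering the first page of a document is only a few lines; in this sketch, file is a hypothetical java.io.File pointing at a PDF:

```java
// Open the document and render page 0 into a bitmap.
ParcelFileDescriptor fd =
        ParcelFileDescriptor.open(file, ParcelFileDescriptor.MODE_READ_ONLY);
PdfRenderer renderer = new PdfRenderer(fd);
PdfRenderer.Page page = renderer.openPage(0);

Bitmap bitmap = Bitmap.createBitmap(page.getWidth(), page.getHeight(),
        Bitmap.Config.ARGB_8888);
page.render(bitmap, null, null, PdfRenderer.Page.RENDER_MODE_FOR_DISPLAY);

// Pages and the renderer must be closed when you're done.
page.close();
renderer.close();
```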

Input Method Editors (IME)

You can now cycle through the different IMEs available on the platform. This is accomplished by using the shouldOfferSwitchingToNextInputMethod method.

2. Material Design

One of the biggest features of Android 5.0 is Material Design. Material Design is a set of guidelines relating to visual design, content motion, and user interaction. The guidelines are intended to go beyond Android and are designed for a wide array of devices and platforms.

Polymer is a notable example of the cross-platform nature of Material Design, with Google creating several Material Design web elements to aid in construction of websites/web apps with a Material Design theme. Despite its cross-platform nature, Material Design still remains a focal point of Android 5.0.

New Widgets

Lollipop introduced two new widgets:

  • CardView: This widget allows for information to be grouped together in a consistent manner. The card itself can have its depth altered to promote or highlight it as needed.
  • RecyclerView: This is a more advanced version of the ListView widget.
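A card can be declared in a layout much like any other view group; the fragment below is a minimal sketch assuming the cardview-v7 support library is on the classpath:

```xml
<android.support.v7.widget.CardView
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:card_view="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    card_view:cardElevation="4dp"
    card_view:cardCornerRadius="2dp">

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Card content" />

</android.support.v7.widget.CardView>
```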

New Themes    

There are two new themes that make use of Material Design principles: Material (dark) and Material Light. Both restyle the system user interface widgets, which are easy to customize, and you can set their color palette. Several animations and transitions, such as the ripple effect, are also defaults of these themes.

Depth and Shadow

Depth can now be altered on Android views through the new Z property. Higher Z values cast larger shadows around the view, giving the appearance of increased elevation. This is a staple of the Material Design ethos where the goal is to create a textile appearance through the use of layers.
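In a layout, elevation is just an attribute; the view below is a small sketch of a raised element:

```xml
<!-- A view raised 8dp above its parent surface; the system draws a
     correspondingly larger shadow beneath it. -->
<TextView
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Floating label"
    android:elevation="8dp" />
```

The same value can be set from code via View.setElevation, and View.setTranslationZ can be animated for effects like a card lifting when touched.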

Animations

Another staple of Material Design is animation. Touch feedback animations and a host of activity transitions all aid in creating a tactile and immersive experience. The goal is not to have information pop out or disappear. Every view/object should appear as a layer on a surface.

Imagine a nice, clean, white desk. On this desk you have various papers, post-it notes and stationery. When you look down at the desk, it’s not a flat view. The desk contains several layers, and objects have different depths and cast shadows on the layer beneath.

If you need to see a page underneath another page, you must move the covering page out of the way. If you want to place your laptop on the desk, you need to slide the existing papers out of the way to make space. When you touch something on your desk, it moves, bends, vibrates, and shuffles.

3. Using Android 5.0

To get started with Android 5.0, download the SDK platform for v21 in your preferred IDE. This will most likely be done through the SDK manager in Eclipse or Android Studio.

In the AndroidManifest.xml file and/or build.gradle file, set the targetSdkVersion to 21.
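In a Gradle-based project the relevant block looks like the following sketch (the build tools and minimum SDK versions here are examples, not requirements):

```groovy
android {
    compileSdkVersion 21            // build against the Android 5.0 SDK
    buildToolsVersion "21.1.2"      // example build tools version

    defaultConfig {
        minSdkVersion 15            // example minimum; pick what your app supports
        targetSdkVersion 21         // opt in to Lollipop behavior
    }
}
```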

Important changes and considerations:

There's a saying in the superhero world, "With great power, comes great responsibility." There is a similar one in the development community, "With large updates, comes extensive testing."

Here's a quick checklist, if you already have an Android app:

  • Does my app run issue-free on ART?
  • If my app uses notifications, how will they be integrated into the lock screen?
  • Can the user interface benefit from a refresh? Is Material Design a good fit and how much work will it involve?
  • The RemoteControlClient class is now deprecated so should I move over to the MediaSession API?
  • WebView now blocks mixed content and third party cookies by default. Do I need to use setMixedContentMode and setAcceptThirdPartyCookies?

A complete list can be found on the Android Developer website.

4. Backwards Compatibility

One of the biggest changes in Android 5.0 is the user interface, with the introduction of Material Design. Making use of Material Design and putting best design practices to use takes a lot of consideration and work on the part of the developer.

For existing apps, developers are faced with further challenges, such as how to leverage the new features of 5.0 whilst maintaining backwards compatibility, providing a consistent user experience across different API levels.

To show how to use Android 5.0 and Material Design in your project, I've created a simple app. It consists of a single activity that displays several widgets. I have then added the following to the res/ folder:

  • menu-v21/: This contains a copy of menu_main.xml and will be used to display Material Design icons on Android 5.0 devices.
  • values-v11/: This contains a styles.xml file that sets the base theme to holo.light for all devices running Android 3.0 or above. Appearance changes to the action bar have also been made in this file.
  • values-v21/: This contains a styles.xml file that sets the base theme to material.light for devices running Android 5.0 and above. It also defines the base colors.

The below image shows the app running on a 4.4.2 device and a 5.0 device. The Material theme has been applied for 5.0+ devices. Other devices will receive the holo.light theme. It shows the default state of both themes and the user interface differences between them.

Color and Action Bar 

With Material Design, defining your app's base colors to fit in with your brand has never been easier. For example, adding the below code to your theme will set the notification bar background, the action bar background, and the user interface widgets.

<!-- Base application theme. -->
<style name="AppTheme" parent="android:Theme.Material.Light">
    <!-- Customize your theme here. -->
    <!-- Main theme colors -->
    <!-- Your app branding color for the app bar. -->
    <item name="android:colorPrimary">#0d7963</item>
    <!-- Darker variant for the status bar and contextual app bars. -->
    <item name="android:colorPrimaryDark">#ff0d5b47</item>
    <!-- Theme UI controls like checkboxes and text fields. -->
    <item name="android:colorAccent">#0d7963</item>
</style>

The results can be very striking and the app can become identifiable with just a glance. There's also a new set of Material Design icons, which are another quick and easy way to bring a modern user interface feel to any existing app.

Here's an example of the difference made by using Material Design icons and defining the main theme colors:

The use of the action bar and color is a dominating feature of Material Design and can effectively brand and distinguish your app. One way to provide a consistent user experience across different API levels is to replicate these features over to styles and themes intended for different API levels.

For example, if we compare the application running on a 4.4.2 device to a 5.0 device:

As you can see, they have a very distinctive look to them. To improve this, we can use the same Material Design icons on API levels lower than 5.0. We can also style the action bar so that it resembles the Material Design version.

For the icons, we can change the images in res/menu/menu_main.xml to Material Design icons. For the action bar, we can edit the res/values-v11/styles.xml file to look like the following:

<resources>
    <!-- Base application theme. -->
    <style name="AppTheme" parent="android:Theme.Holo.Light">
        <!-- Customize your theme here. -->
        <item name="android:actionBarStyle">@style/MyActionBar</item>
    </style>

    <style name="MyActionBar" parent="@android:style/Widget.Holo.Light.ActionBar">
        <item name="android:background">#0d7963</item>
    </style>
</resources>

Here's another look at the two compared, after the changes:

The version running on the 4.4.2 device becomes more recognizable as our application and our brand. Without any significant changes, the app already looks more consistent across the different APIs and has a more modern feel to it.

Using Non-Supported Features

Certain features are exclusive to Android Lollipop, most notably the activity transitions and the reveal animations. This does not necessarily mean that you have to forgo using them or create a separate app that makes use of them. You can check the system version at runtime and only perform certain API calls if the app is running on an appropriate version of Android.

An example to check if the system is 5.0+:

// Check if we're running on Android 5.0 or higher
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
    // Call some material design APIs here
} else {
    // Implement this feature without material design
}

Keeping Previous Themes

Just because you can do something doesn't always mean that you should. There is absolutely nothing wrong with the Holo theme that Android has been using since Honeycomb. You can provide alternative layouts and themes and have them apply to different API levels. For example, you could have the Material Design theme apply to any devices with an API of 5.0 and above. The Holo theme will apply to any device with an API of 3.0 and above. Finally, the classic theme could be applied to all devices below 3.0.

To do this, you would use the following directories in your project:

  • res/values/ (default location)
  • res/values-v11/ (for 3.0 +)
  • res/values-v21/ (for 5.0 +)

In each directory, you can place a styles.xml file that will define the desired theme.

Support Libraries    

The V7 r21 support libraries support several widgets and features from Material Design.

Theme.AppCompat enables the use of the color palette by extending one of the AppCompat themes. For example, Theme.AppCompat.Light:

<style name="Theme.MyTheme" parent="Theme.AppCompat.Light">
    <item name="colorPrimary">@color/material_blue_500</item>
    <item name="colorPrimaryDark">@color/material_blue_700</item>
    <item name="colorAccent">@color/material_green_A200</item>
</style>

It also provides Material Design widgets for the following:

  • EditText
  • CheckBox
  • Spinner
  • RadioButton
  • SwitchCompat
  • CheckedTextView

The V7 support library also gives access to the new CardView and RecyclerView widgets.

If you stick with AppCompat in your layout designs, it's possible to create a single layout that will maintain the same visuals throughout multiple API levels.

To use the v7 support library, you need to add it to your project. If you're using Android Studio and Gradle, it can be added to the dependencies section of your build.gradle file:

dependencies {
    compile 'com.android.support:appcompat-v7:21.0.+'
    compile 'com.android.support:cardview-v7:21.0.+'
    compile 'com.android.support:recyclerview-v7:21.0.+'
} 

When including the v7 support library, you must set your minSdkVersion to 7 or higher.

Conclusion

Android 5.0 is a major release. Updates such as ART and lock screen notifications will make an immediate impact. Other updates, such as Material Design, the overview screen, and job scheduling, will take time for developers to implement and adopt.

The users will also play a large role in shaping the near future of Android. Previous attempts at bringing Android to the TV space have not been well received. Smart TVs on the whole have yet to become a must-have device.

Having a unified and familiar user experience across multiple devices and screens is exciting and in my opinion necessary going forward. The success of this, though, will ultimately depend on adoption and user demand.

Google set the stage at this year's Google I/O and with Lollipop the actors are now assembled. Regardless of how long the play runs for and the plaudits it receives, no one can say that Google hasn't tried.

2014-12-29T16:45:01.000Z | Leif Johnson

What's New in Android Lollipop

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-22806

After months of speculation, hype and teasing, Google officially released Android 5.0 to the world on 12 November 2014. The SDK was made available on 3 November. We already knew a lot about the features, due to the L preview SDK that was released on 25 June 2014. What we didn't know was what the L would stand for.

Lollipop was the name given to the 5.0 update. Looking back, Lollipop was a clear favorite. But, if given enough time to speculate, the obvious once again holds the power to surprise.

Lollipop is a significant update for the Android platform. It's arguably the biggest release to date, and certainly the most ambitious.

1. Features

Now that the Android SDK is out, here's a roundup of some of the new features in Android 5.0.

Battery

Project Volta   

In recent years, Google has focused each Android release on improving a particular aspect of the operating system. For 5.0, it was battery life.

The JobScheduler API allows you to run jobs asynchronously at a later time or under particular conditions. JobInfo objects can be used to define the conditions a scheduled job will run under.

Thanks to these additions, developers now have a lot more control over when and how battery-draining tasks are performed. 
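Putting these pieces together, a scheduled job might be set up as in the following sketch. MyJobService is a hypothetical JobService subclass, and the job id of 1 is arbitrary.

ComponentName service = new ComponentName( context, MyJobService.class );

// Describe the conditions under which the job may run.
JobInfo job = new JobInfo.Builder( 1, service )
    .setRequiredNetworkType( JobInfo.NETWORK_TYPE_UNMETERED ) // only on Wi-Fi
    .setRequiresCharging( true )                              // only while charging
    .build();

// Hand the job to the system's JobScheduler for later execution.
JobScheduler scheduler = (JobScheduler) context.getSystemService( Context.JOB_SCHEDULER_SERVICE );
scheduler.schedule( job );

The job will then run only when both conditions are met, letting the system batch battery-draining work.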

Developer Tools

There is a new ADB command dumpsys batterystats that can be used to generate statistical data about battery usage on a device. Take a look at the following command to see how this works.

adb shell dumpsys batterystats --charged <package-name>

Notifications

In Lollipop, notifications can be displayed on the lock screen. Developers can specify the amount of information displayed within a notification via setVisibility, which accepts the following values:

  • VISIBILITY_PRIVATE: shows basic information, such as the notification's icon, but hides the notification's content
  • VISIBILITY_PUBLIC: shows the notification's full content
  • VISIBILITY_SECRET: shows nothing, not even the notification's icon
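As a sketch of how one of these values is applied, visibility is set on the builder when the notification is created. The icon resource and notification id here are hypothetical.

Notification notification = new Notification.Builder( context )
    .setSmallIcon( R.drawable.ic_message ) // hypothetical icon resource
    .setContentTitle( "New message" )
    .setContentText( "Content hidden on the lock screen" )
    .setVisibility( Notification.VISIBILITY_PRIVATE )
    .build();

NotificationManager manager = (NotificationManager) context.getSystemService( Context.NOTIFICATION_SERVICE );
manager.notify( 1, notification ); // 1 is an arbitrary notification id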

Metadata can now be added to notifications to specify a category and priority, and to associate additional contacts.

Key notifications, such as incoming calls, will appear in a heads-up notification window, which will float at the top of the current app until the user acknowledges or dismisses the notification.

Multitasking

The recents screen has been renamed to overview. With the new name come new APIs that improve multitasking options on Android. You can now have your activities treated as separate tasks, each shown in its own window on the overview screen.

For example, a web browser app could be set so that each tab has its own window. In the previous recents screen, a single browser app would have been displayed.

If you have a website, you can add <meta name="theme-color" content="#3F51B5"> to your header section to have overview display the given color as the header for your website.

Runtime and ART

Previous versions of Android have all used Dalvik as the process virtual machine. Applications are commonly written in Java, which is then compiled to bytecode. This is then translated to Dalvik bytecode and stored in .dex and .odex files, for Dalvik to then process. 

This is a very basic explanation of what the runtime does, but it hopefully conveys its importance. Applications run on the process virtual machine, so its performance determines the overall performance of an app and can become a bottleneck.

Dalvik uses JIT (Just In Time) compilation, meaning that it compiles an application's bytecode only at the moment it is needed, while the application is running.

ART, on the other hand, uses AOT (Ahead Of Time) compilation. When an application is installed, its bytecode is compiled by ART's dex2oat utility, which creates ELF executables instead of .odex files. From then on, the application is executed from the already compiled ELF executable.

That's a lot of saved compiling at the expense of longer application install times and some extra disk space.

With the addition of improved garbage collection (GC), ART outperforms Dalvik in nearly every way, making for a sharper and more fluid Android experience.

Android TV

To help bring your app to large screen displays, Lollipop introduces the Leanback UI and the Android TV Input Framework (TIF). The Leanback library provides user interface widgets for TV apps. TIF is designed to allow TV apps to handle video streams from sources such as HDMI inputs, TV tuners, and IPTV receivers.

Graphics

Khronos OpenGL ES 3.1 has been added. Key features include:

  • compute shaders
  • separate shader objects
  • shading language improvements
  • extensions for advanced blend modes and debugging
  • indirect draw commands
  • multisample and stencil textures

Android 5.0 remains backwards compatible with OpenGL ES 2.0 and 3.0.

Android Extension Pack (AEP)

To supplement OpenGL ES 3.1, a set of OpenGL ES extensions have been added that allow for the following:

  • guaranteed fragment shader support for shader storage buffers, images, and atomics (fragment shader support is optional in OpenGL ES 3.1)
  • different blend modes for each color attachment in a frame buffer
  • tessellation and geometry shaders
  • ASTC (LDR) texture compression format
  • per-sample interpolation and shading

WebView

Android Lollipop includes a new version of Chromium for WebView, based on the Chromium M37 release, which adds support for WebAudio, WebRTC, and WebGL.

Native support for Web Components is also included in the update and will allow for the use of Polymer and its Material Design elements without requiring polyfills.

As of Android 5.0, Chromium is now updatable from the Play Store so new APIs and bug fixes will be available immediately and will no longer require an update of the Android operating system.

Media Browsing    

The new android.media.browse API allows apps to browse the media content library of other apps. The MediaBrowserService class is used to expose media in an application, while the MediaBrowser class is used to interact with a media browser service.

Media Playback Control

Two new classes have been introduced to make playback control simpler to manage across different UIs and services.

MediaSession replaces RemoteControlClient. It provides a set of callback methods for use in transport controls and media buttons. MediaController can be used to create a custom media controller app, which can then be used to send commands to a MediaSession.

New Sensors

Two new sensors have been introduced:

  • Tilt Detector: improves activity recognition
  • Heart Rate Sensor: capable of reporting the heart rate of the user touching the device

Of course, both of these sensors require supported hardware.
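Reading the heart rate sensor follows the usual SensorManager pattern. A minimal sketch, assuming the android.permission.BODY_SENSORS permission has been declared in the manifest:

SensorManager sensorManager = (SensorManager) context.getSystemService( Context.SENSOR_SERVICE );
Sensor heartRate = sensorManager.getDefaultSensor( Sensor.TYPE_HEART_RATE );

if ( heartRate != null ) { // null on devices without the hardware
    sensorManager.registerListener( new SensorEventListener() {
        @Override
        public void onSensorChanged( SensorEvent event ) {
            float beatsPerMinute = event.values[0];
            // use the reading
        }

        @Override
        public void onAccuracyChanged( Sensor sensor, int accuracy ) { }
    }, heartRate, SensorManager.SENSOR_DELAY_NORMAL );
}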

Managed Provisioning

Device administrators can use a managed provisioning service to add apps to a separate managed profile. If there's an existing personal account on a device that has been provisioned, the managed profile apps will appear alongside the existing applications.

Device Owner

A device owner is a specialised type of device administrator that can create and remove secondary users and configure global settings, essentially giving Android a traditional administrator and user account system.

Screen Pinning

Screen pinning is a new feature that is comparable to kiosk mode on iOS. Screen pinning includes the following features:

  • The status bar is blank.
  • Other apps cannot launch new activities.
  • User notifications and status information are hidden.
  • The current app can create new activities as long as no new tasks are created.

Screen pinning can be activated manually via Settings > Security > Screen Pinning. It can also be activated programmatically. The startLockTask method can be called from your app to activate screen pinning. If the app is not a device owner, a confirmation prompt will be shown. The setLockTaskPackages method can be called by a device owner app to avoid the confirmation prompt.

To deactivate screen pinning, you need to call stopLockTask if it was initiated by a device owner app. If it was activated by a non-device owner, the user can exit screen pinning mode by holding both the back and recents buttons.
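Inside an activity, the calls themselves are straightforward. A sketch:

// Enter screen pinning. A non-device owner app triggers a confirmation prompt.
startLockTask();

// Later, leave screen pinning. Only needed when pinning was
// started programmatically by a device owner app.
stopLockTask();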

Screen Sharing

Screen capturing is now possible through the new android.media.projection APIs. The createVirtualDisplay method allows the calling app to capture the screen into a surface object, which can then be sent across the network. The API can only capture non-secure content and does not include audio.

Camera

RAW image capturing has finally arrived on Android, thanks to the new android.hardware.camera2 API.

Bluetooth Low Energy

Android devices can now act as Bluetooth LE peripherals. Apps can make use of this to make their presence known to nearby devices. With the new android.bluetooth.le APIs, you can enable your apps to connect to nearby Bluetooth devices, broadcast advertisements, and scan for responses. These new features also come with a new manifest permission, BLUETOOTH_ADMIN.

These APIs will be extremely useful when working with wearable devices, health and fitness apps, and monitoring apps. All of these are predicted growth areas for Android in the near future.

NFC

NFC has been improved in several ways:

  • Android Beam is now an option in the share menu.
  • invokeBeam can be used to initiate the sharing of data. You no longer have to physically bump devices.
  • registerAidsForService and setPreferredService have been added to aid the development of payment apps.

Multiple Network Connections

New APIs allow for apps to query networks for available features, such as whether the network is cellular, metered or Wi-Fi.

Printing Framework

Bitmap images can now be rendered from PDF document pages, using the new PdfRenderer class.

Input Method Editors (IME)

You can now cycle through the different IMEs available on the platform. This is accomplished using the shouldOfferSwitchingToNextInputMethod method.

2. Material Design

One of the biggest features of Android 5.0 is Material Design. Material Design is a set of guidelines relating to visual design, content motion, and user interaction. The guidelines are intended to go beyond Android and are designed for a wide array of devices and platforms.

Polymer is a notable example of the cross-platform nature of Material Design, with Google creating several Material Design web elements to aid in construction of websites/web apps with a Material Design theme. Despite its cross-platform nature, Material Design still remains a focal point of Android 5.0.

New Widgets

Lollipop introduced two new widgets:

  • CardView: This widget allows for information to be grouped together in a consistent manner. The card itself can have its depth altered to promote or highlight it as needed.
  • RecyclerView: This is a more advanced version of the ListView widget.

New Themes    

There are two new themes that make use of Material Design principles, Dark Material and Light Material. Both apply Material styling to the system's user interface widgets, which are easy to customize, including their color palette. Several animations and transitions, such as the ripple effect, are also defaults of these themes.

Depth and Shadow

Depth can now be altered on Android views through the new Z property. Higher Z values cast larger shadows around the view, giving the appearance of increased elevation. This is a staple of the Material Design ethos where the goal is to create a textile appearance through the use of layers.
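In layout XML, the Z property is exposed as the elevation attribute. A sketch, with arbitrary view content:

<TextView
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Floating label"
    android:elevation="8dp" />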

Animations

Another staple of Material Design is animation. Touch feedback animations and a host of activity transitions all aid in creating a tactile and immersive experience. The goal is not to have information pop out or disappear. Every view/object should appear as a layer on a surface.

Imagine a nice, clean, white desk. On this desk you have various papers, post-it notes and stationery. When you look down at the desk, it’s not a flat view. The desk contains several layers, and objects have different depths and cast shadows on the layer beneath.

If you need to see a page underneath another page, you must move the covering page out of the way. If you want to place your laptop on the desk, you need to slide the existing papers out of the way to make space. When you touch something on your desk, it moves, bends, vibrates, and shuffles.

3. Using Android 5.0

To get started with Android 5.0, download the SDK platform for v21 in your preferred IDE. This will most likely be done through the SDK manager in Eclipse or Android Studio.

In the AndroidManifest.xml file and/or build.gradle file, set the targetSdkVersion to 21.

Important changes and considerations:

There's a saying in the superhero world, "With great power, comes great responsibility." There is a similar one in the development community, "With large updates, comes extensive testing."

Here's a quick checklist, if you already have an Android app:

  • Does my app run issue-free on ART?
  • If my app uses notifications, how will they be integrated into the lock screen?
  • Can the user interface benefit from a refresh? Is Material Design a good fit and how much work will it involve?
  • The RemoteControlClient class is now deprecated so should I move over to the MediaSession API?
  • WebView now blocks mixed content and third party cookies by default. Do I need to use setMixedContentMode and setAcceptThirdPartyCookies?

A complete list can be found on the Android Developer website.

4. Backwards Compatibility

One of the biggest changes in Android 5.0 is the user interface with the introduction of Material Design. Making use of Material Design and putting best design practices to use, takes a lot of consideration and work on the part of the developer.

For existing apps, developers are faced with further challenges, such as how to leverage the new features of 5.0 whilst maintaining backwards compatibility, providing a consistent user experience across different API levels.

To show how to use Android 5.0 and Material Design in your project, I've created a simple app. It consists of a single activity that displays several widgets. I have then added the following to the res/ folder:

  • menu-v21/: This contains a copy of the menu_main.xml and will be used to display Material Design icons on Android 5.0 devices.
  • values-v11/: This contains a styles.xml file that sets the base theme to holo.light for all devices running Android 3.0 or above. Appearance changes to the action bar have also been made in this file.
  • values-v21/: This contains a styles.xml file that sets the base theme to material.light for devices running Android 5.0 and above. It also defines the base colors.

The below image shows the app running on a 4.4.2 device and a 5.0 device. The Material theme has been applied for 5.0+ devices. Other devices will receive the holo.light theme. It shows the default state of both themes and the user interface differences between them.

Color and Action Bar 

With Material Design, defining your app's base colors to fit in with your brand has never been easier. For example, adding the below code to your theme will set the notification bar background, the action bar background, and the user interface widgets.

<!-- Base application theme. -->
<style name="AppTheme" parent="android:Theme.Material.Light">
    <!-- Customize your theme here. -->
    <!-- Main theme colors -->
    <!-- your app branding color for the app bar -->
    <item name="android:colorPrimary">#0d7963</item>
    <!-- darker variant for the status bar and contextual app bars -->
    <item name="android:colorPrimaryDark">#ff0d5b47</item>
    <!-- theme UI controls like checkboxes and text fields -->
    <item name="android:colorAccent">#0d7963</item>
</style>

The results can be very striking and the app can become identifiable with just a glance. There's also a new set of Material Design icons, which are another quick and easy way to bring a modern user interface feel to any existing app.

Here's an example of the difference made by using Material Design icons and defining the main theme colors:

The use of the action bar and color is a dominating feature of Material Design and can effectively brand and distinguish your app. One way to provide a consistent user experience across different API levels is to replicate these features over to styles and themes intended for different API levels.

For example, if we compare the application running on a 4.4.2 device to a 5.0 device:

As you can see, they have a very distinctive look to them. To improve this, we can use the same Material Design icons on API levels lower than 5.0. We can also style the action bar so that it resembles the Material Design version.

For the icons, we can change the images referenced in res/menu/menu_main.xml to Material Design icons. For the action bar, we can edit the res/values-v11/styles.xml file to look like the following:

<resources>
    <!-- Base application theme. -->
    <style name="AppTheme" parent="android:Theme.Holo.Light">
        <!-- Customize your theme here. -->
        <item name="android:actionBarStyle">@style/MyActionBar</item>
    </style>
    <style name="MyActionBar" parent="@android:style/Widget.Holo.Light.ActionBar">
        <item name="android:background">#0d7963</item>
    </style>
</resources>

Here's another look at the two compared, after the changes:

The version running on the 4.4.2 device becomes more recognizable as our application and our brand. Without any significant changes, the app already looks more consistent across the different APIs and has a more modern feel to it.

Using Non-Supported Features

Certain features are exclusive to Android Lollipop, most notably the activity transitions and the reveal animations. This does not necessarily mean that you have to forgo using them or create a separate app that makes use of them. You can check the system version at runtime and only perform certain API calls if the app is running on an appropriate version of Android.

An example to check if the system is 5.0+:

// Check if we're running on Android 5.0 or higher
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
    // Call some material design APIs here
} else {
    // Implement this feature without material design
}

Keeping Previous Themes

Just because you can do something doesn't always mean that you should. There is absolutely nothing wrong with the Holo theme that Android has been using since Honeycomb. You can provide alternative layouts and themes and have them apply to different API levels. For example, you could have the Material Design theme apply to any devices with an API of 5.0 and above. The Holo theme will apply to any device with an API of 3.0 and above. Finally, the classic theme could be applied to all devices below 3.0.

To do this, you would use the following directories in your project:

  • res/values/ (default location)
  • res/values-v11/ (for 3.0 +)
  • res/values-v21/ (for 5.0 +)

In each directory, you can place a styles.xml file that will define the desired theme.
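For example, a res/values-v21/styles.xml applying the Material theme could look like the following sketch. The theme name is arbitrary, but it must be the same in all three styles.xml files so the manifest can reference a single theme.

<resources>
    <!-- Applied only on API 21+ devices; the v11 and default copies
         declare the same style name with different parent themes. -->
    <style name="AppTheme" parent="android:Theme.Material.Light">
        <!-- Customize the Material theme here. -->
    </style>
</resources>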

Support Libraries    

The V7 r21 support libraries support several widgets and features from Material Design.

Theme.AppCompat enables the use of the color palette when you extend one of the AppCompat themes, for example, Theme.AppCompat.Light:

<style name="Theme.MyTheme" parent="Theme.AppCompat.Light">
    <item name="colorPrimary">@color/material_blue_500</item>
    <item name="colorPrimaryDark">@color/material_blue_700</item>
    <item name="colorAccent">@color/material_green_A200</item>
</style>

It also provides Material Design widgets for the following:

  • EditText
  • CheckBox
  • Spinner
  • RadioButton
  • SwitchCompat
  • CheckedTextView

The V7 support library also gives access to the new CardView and RecyclerView widgets.

If you stick with AppCompat in your layout designs, it's possible to create a single layout that will maintain the same visuals throughout multiple API levels.

To use the V7 support library, you need to add it to your project. If you're using Android Studio and Gradle, it can be added to the dependencies section of the build.gradle file:

dependencies {
    compile 'com.android.support:appcompat-v7:21.0.+'
    compile 'com.android.support:cardview-v7:21.0.+'
    compile 'com.android.support:recyclerview-v7:21.0.+'
} 

When including the v7 support library, you must set your minSdkVersion to at least 7.

Conclusion

Android 5.0 is a major release. Updates such as ART and on-screen notifications will make an immediate impact. Other updates such as Material Design, Overview, and Job Scheduling will take time for developers to implement and adopt.

The users will also play a large role in shaping the near future of Android. Previous attempts at bringing Android to the TV space have not been well received. Smart TVs on the whole have yet to become a must-have device.

Having a unified and familiar user experience across multiple devices and screens is exciting and in my opinion necessary going forward. The success of this, though, will ultimately depend on adoption and user demand.

Google set the stage at this year's Google I/O and with Lollipop the actors are now assembled. Regardless of how long the play runs for and the plaudits it receives, no one can say that Google hasn't tried.

2014-12-29T16:45:01.000Z Leif Johnson

Create a Space Invaders Game in Corona: Project Setup

Final product image
What You'll Be Creating

In this three-part series, I will be showing you how to create a game inspired by the popular seventies game, Space Invaders. Along the way, you'll learn about Corona's scene management functionality, timers, moving a character, the built-in physics engine, and how to use modules to emulate classes in the Lua programming language.

1. New Project

Open the Corona Simulator, click New Project, and configure the project as shown below.
Select a location to save your project and click OK. This will create a folder with a number of icons and three files that are important to us, main.lua, config.lua, and build.settings.
We'll take a look at each file in the next few steps.

2. Build Settings

The build.settings file is responsible for the build time properties of the project.
Open this file, remove its contents, and populate it with the following configuration.

settings =
{
    orientation =
    {
        default ="portrait",
        supported =
        {
          "portrait"
        },
    },
}

In build.settings, we are setting the default orientation and restricting the application
to only support a portrait orientation. You can learn which other settings you can include in
build.settings by exploring the Corona documentation.

3. Application Configuration

The config.lua file handles the application's configuration. As we did with build.settings,
open this file, remove its contents, and add the following configuration.

application =
{
    content =
    {
      width = 768,
      height = 1024,
      scale = "letterbox",
      fps = 30,
    }
}

This sets the default width and height of the screen, uses letterbox to scale the images,
and sets the frame rate to 30. Visit the Corona documentation to learn more about the other properties you can set in config.lua.

4. Entry Point

The main.lua file is the file that the application loads first and uses to bootstrap the application. We will be using main.lua to set a few default settings for the application and use the Composer library to load the first screen.

If you're not familiar with Corona's Composer library, then I recommend giving the
documentation a quick read. In short, Composer is the built-in solution to scene (screen) creation and management in Corona. The library provides developers with an easy way to create and transition between individual scenes.

The newer Composer module replaces the older and now deprecated StoryBoard module. A migration guide is available to help convert your old projects over to use Composer.

5. Hide Status Bar

We don't want the status bar showing in our application. Add the following code snippet to main.lua to hide the status bar.

display.setStatusBar(display.HiddenStatusBar)

6. Set Default Anchor Points

To set the default anchor, or registration, points, add the following code block to main.lua.

display.setDefault( "anchorX", 0.5)
display.setDefault( "anchorY", 0.5)


The anchorX and anchorY properties specify where you want the registration point of your display objects to be. Note that the value ranges from 0.0 to 1.0. For example, if you'd want the registration point to be the top left of the display object, then you'd set both properties to 0.0.
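For instance, to position display objects by their top-left corner instead of their center, you could set both defaults to 0.0. The image file name here is a hypothetical asset.

display.setDefault( "anchorX", 0.0 )
display.setDefault( "anchorY", 0.0 )

-- This image's x and y now describe its top-left corner.
local background = display.newImage( "background.png", 0, 0 )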

7. Seed Random Generator

Our game will be using Lua's math.random function to generate random numbers. To make sure that the numbers are truly random each time the application runs, you must provide a seed value. If you don't provide a seed value, the application will generate the same randomness every time.

A good seed value is Lua's os.time function since it will be different each time the
application is run. Add the following code snippet to main.lua.

math.randomseed( os.time() )

8. Avoiding Globals

When using Corona, and specifically the Lua programming language, one way to have access to variables application-wide is to use global variables. You declare a global variable by leaving off the keyword local in front of the variable declaration.

For example, the following code block declares two variables. The first one is a local variable that would only be available in the code block it is defined in. The second one is a global variable that is available anywhere in the application.

local iamalocalvariable = "local"
iamaglobalvariable = "global"

It is generally considered bad practice to use global variables. The most prevalent reason is to avoid naming conflicts, that is, having two variables with the same name. We can solve this problem by using modules. Create a new Lua file, name it gamedata.lua, and add the following code to it.

local M = {}
return M

We simply create a table and return it. To utilize this, we use Lua's require method. Add the following to main.lua.

local gameData = require( "gamedata" )

We can then add keys to gameData, which will be the faux global variables. Take a look at the following example.

gameData.invaderNum = 1 -- Used to keep track of the Level we are on
gameData.maxLevels = 3 -- Max number of Levels the game will have
gameData.rowsOfInvaders = 4 -- How many rows of Invaders to create

Whenever we want to access these variables, all we have to do is use the require function to load gamedata.lua. Every time you load a module using Lua's require function, it adds the module to the package.loaded table. When you load a module, the package.loaded table is checked first to see if the module is already loaded. If it is, the cached module is used instead of loading it again.
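Because of this caching, every file that requires gamedata.lua shares one and the same table. A quick illustration:

local gameData = require( "gamedata" )
gameData.invaderNum = 2

-- Elsewhere in the app, require returns the cached table,
-- so the updated value is visible here too.
local sameData = require( "gamedata" )
print( sameData.invaderNum ) -- prints 2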

9. Require Composer

Before we can use the Composer module, we must first require it. Add the following to main.lua.

local composer = require( "composer" )

10. Load the Start Scene

Add the following code snippet to main.lua. This will make the application go to the scene named start, which is also a Lua file, start.lua. You don't need to append the file extension when calling the gotoScene function.

composer.gotoScene( "start" )

11. Start Scene

Create a new Lua file named start.lua in the project's main directory. This will be a composer file, which means we need to require the Composer module and create a composer scene. Add the following snippet to start.lua.

local composer = require( "composer" )
local scene = composer.newScene()
return scene

The call to newScene makes start.lua part of composer's scene hierarchy. This means that it becomes a screen within the game, which we can call composer methods on.

From here on out, the code added to start.lua should be placed above the return statement.

12. Local Variables

The following are the local variables we will need for the start scene.

local startButton -- used to start the game
local pulsatingText = require("pulsatingtext") -- A module providing a pulsating text effect
local starFieldGenerator = require("starfieldgenerator") -- A module that generates the star field
local starGenerator -- An instance of starFieldGenerator

It's important to understand that local variables in the main chunk only get initialized once, when the scene is loaded for the first time. When navigating through the composer scenes, for example, by invoking methods like gotoScene, the local variables will already be initialized.

This is important to remember if you want the local variables to be reinitialized when navigating back to a particular scene. The easiest way to do this is to remove the scene from the composer hierarchy by calling the removeScene method. The next time you navigate to that scene, it will be automatically reloaded. That's the approach we'll be taking in this tutorial.

The pulsatingText and starFieldGenerator are two custom modules we will create to add class-like functionality to the project. Create two new files in your project folder named pulsatingtext.lua and starfieldgenerator.lua.

13. Composer Events

If you've taken the time to read the documentation on Composer, which I linked to earlier, you will have noticed that it includes a template containing every possible composer event. The comments are very useful, as they indicate which events to leverage for initializing assets, timers, etc. We are interested in the scene:create, scene:show, and scene:hide methods for this tutorial.

Step 1: scene:create

Add the following code snippet to start.lua.

function scene:create( event )
    local group = self.view
    startButton = display.newImage( "new_game_btn.png", display.contentCenterX, display.contentCenterY + 100 )
    group:insert( startButton )
end

This method is called when the scene's view doesn't exist yet. This is where you should initialize the display objects and add them to the scene. The group variable is pointing to self.view, which is a GroupObject for the entire scene.

We create the startButton by using the Display object's newImage method, which takes as its parameters the path to the image and the x and y values for the image's position on screen.

Step 2: scene:show

Composer's scene:show method has two phases. The will phase is called when the scene is still off-screen, but is about to come on-screen. The did phase is called when the scene is on-screen. This is where you want to add code to make the scene come alive, start timers, add event listeners, play audio, etc.

In this tutorial we are only interested in the did phase. Add the following code snippet to start.lua.

function scene:show( event )
    local phase = event.phase
    local previousScene = composer.getSceneName( "previous" )
    if ( previousScene ~= nil ) then
        composer.removeScene( previousScene )
    end
    if ( phase == "did" ) then
        startButton:addEventListener( "tap", startGame )
    end
end

We declare a local variable phase, which we use to check which phase the show method is in. Since we will be coming back to this scene later in the game, we check to see if there is a previous scene and, if so, remove it. We add a tap listener to the startButton that calls the startGame function.

Step 3: scene:hide

Composer's scene:hide method also has two phases. The will phase is called when the scene is on-screen, but is about to go off-screen. Here you will want to stop any timers, remove event listeners, stop audio, etc. The did phase is called once the scene has gone off-screen.

In this tutorial, we are only interested in the will phase in which we remove the tap listener from the startButton.

function scene:hide( event )
    local phase = event.phase
    if ( phase == "will" ) then
        startButton:removeEventListener( "tap", startGame )
    end
end

16. Start Game

The startGame function is called when the user taps the startButton. In this function, we invoke the gotoScene composer method, which will take us to the gamelevel scene.

function startGame()
    composer.gotoScene("gamelevel")
end

17. Game Level Scene

Create a new file named gamelevel.lua and add the following code to it. This should look familiar. We are creating a new scene and returning it.

local composer = require("composer")
local scene = composer.newScene()

return scene

18. Add Scene Listeners

We need to add scene listeners for the create, show, and hide methods. Add the following code to start.lua.

scene:addEventListener( "create", scene )
scene:addEventListener( "show", scene )
scene:addEventListener( "hide", scene )

19. Test Progress

If you test the game now, you should see a black screen with a button you can tap. Tapping the button should take you to the gamelevel scene, which is now just a blank screen.

Conclusion

This brings this part of the series to a close. In the next part, we will start implementing the game's gameplay. Thanks for reading and see you in the second part of this series.

2014-12-31T15:20:08.000Z2014-12-31T15:20:08.000ZJames Tyner

Create a Space Invaders Game in Corona: Project Setup

What You'll Be Creating

In this three-part series, I will be showing you how to create a game inspired by the popular seventies game, Space Invaders. Along the way, you'll learn about Corona's scene management functionality, timers, moving a character, the built-in physics engine, and how to use modules to emulate classes in the Lua programming language.

1. New Project

Open the Corona Simulator, click New Project, and configure the project as shown below. Select a location to save your project and click OK. This will create a folder with a number of icons and three files that are important to us: main.lua, config.lua, and build.settings. We'll take a look at each file in the next few steps.

2. Build Settings

The build.settings file is responsible for the build-time properties of the project. Open this file, remove its contents, and populate it with the following configuration.

settings =
{
    orientation =
    {
        default = "portrait",
        supported =
        {
            "portrait"
        },
    },
}

In build.settings, we are setting the default orientation and restricting the application to only support a portrait orientation. You can learn which other settings you can include in build.settings by exploring the Corona documentation.
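As a sketch of what those other settings can look like, platform-specific configuration lives in the same settings table. The android and iphone keys below are documented by Corona, but the specific permission and plist values here are example assumptions, not part of this project:

```lua
settings =
{
    orientation =
    {
        default = "portrait",
        supported = { "portrait" },
    },

    -- Hypothetical platform-specific additions, for illustration only.
    android =
    {
        usesPermissions =
        {
            "android.permission.INTERNET",
        },
    },
    iphone =
    {
        plist =
        {
            UIStatusBarHidden = true,
        },
    },
}
```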

3. Application Configuration

The config.lua file handles the application's configuration. As we did with build.settings, open this file, remove its contents, and add the following configuration.

application =
{
    content =
    {
      width = 768,
      height = 1024,
      scale = "letterbox",
      fps = 30,
    }
}

This sets the default width and height of the screen, uses letterbox to scale the images, and sets the frame rate to 30. Visit the Corona documentation to learn more about the other properties you can set in config.lua.

4. Entry Point

The main.lua file is the file that the application loads first and uses to bootstrap the application. We will be using main.lua to set a few default settings for the application and use the Composer library to load the first screen.

If you're not familiar with Corona's Composer library, then I recommend giving the documentation a quick read. In short, Composer is the built-in solution for scene (screen) creation and management in Corona. The library provides developers with an easy way to create and transition between individual scenes.

The newer Composer module replaces the older and now deprecated StoryBoard module. A migration guide is available to help convert your old projects over to use Composer.

5. Hide Status Bar

We don't want the status bar showing in our application. Add the following code snippet to main.lua to hide the status bar.

display.setStatusBar(display.HiddenStatusBar)

6. Set Default Anchor Points

To set the default anchor or registration points add the following code block to main.lua.

display.setDefault( "anchorX", 0.5)
display.setDefault( "anchorY", 0.5)


The anchorX and anchorY properties specify where you want the registration point of your display objects to be. Note that the value ranges from 0.0 to 1.0. For example, if you wanted the registration point to be the top left of the display object, you'd set both properties to 0.0.
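The defaults can also be overridden per object. The rectangle below is a hypothetical display object, used only to illustrate the anchor properties:

```lua
-- Defaults set in main.lua apply to objects created afterwards.
display.setDefault( "anchorX", 0.5 )
display.setDefault( "anchorY", 0.5 )

-- A hypothetical rectangle, anchored at its top-left corner
-- instead of its center.
local rect = display.newRect( 0, 0, 100, 100 )
rect.anchorX = 0
rect.anchorY = 0
```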

7. Seed Random Generator

Our game will be using Lua's math.random function to generate random numbers. To make sure that the numbers are truly random each time the application runs, you must provide a seed value. If you don't provide a seed value, the application will generate the same randomness every time.

A good seed value is the value returned by Lua's os.time function since it will be different each time the application is run. Add the following code snippet to main.lua.

math.randomseed( os.time() )

8. Avoiding Globals

When using Corona, and specifically the Lua programming language, one way to have access to variables application-wide is to use global variables. The way you declare a global variable is by leaving off the keyword local in front of the variable declaration.

For example, the following code block declares two variables. The first one is a local variable that would only be available in the code block it is defined in. The second one is a global variable that is available anywhere in the application.

local iamalocalvariable = "local"
iamaglobalvariable = "global"

It is generally considered bad practice to use global variables. The most prevalent reason is to avoid naming conflicts, that is, having two variables with the same name. We can solve this problem by using modules. Create a new Lua file, name it gamedata.lua, and add the following code to it.

local M = {}
return M

We simply create a table and return it. To utilize this module, we use Lua's require function. Add the following to main.lua.

local gameData = require( "gamedata" )

We can then add keys to gameData, which will be the faux global variables. Take a look at the following example.

gameData.invaderNum = 1 -- Used to keep track of the Level we are on
gameData.maxLevels = 3 -- Max number of Levels the game will have
gameData.rowsOfInvaders = 4 -- How many rows of Invaders to create

Whenever we want to access these variables, all we have to do is use the require function to load gamedata.lua. Every time you load a module using Lua's require function, the module is added to the package.loaded table. When you load a module, the package.loaded table is checked first to see if the module is already loaded. If it is, the cached module is used instead of loading it again.
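This caching behavior can be demonstrated in plain Lua. The counter module below is hypothetical and is registered through package.preload purely so the example fits in one snippet:

```lua
-- Register a hypothetical module without a separate file, purely to
-- illustrate how require caches modules in package.loaded.
package.preload["counter"] = function()
    return { value = 0 }
end

local a = require("counter")
local b = require("counter") -- returns the cached table, not a new one

a.value = 42
print(b.value) --> 42: a and b reference the same table
print(package.loaded["counter"] == a) --> true: the module is cached
```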

9. Require Composer

Before we can use the Composer module, we must first require it. Add the following to main.lua.

local composer = require( "composer" )

10. Load the Start Scene

Add the following code snippet to main.lua. This will make the application go to the scene named start, which is also a Lua file, start.lua. You don't need to append the file extension when calling the gotoScene function.

composer.gotoScene( "start" )
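gotoScene also accepts an optional second argument, a table of transition options. The effect name and duration below are example values, not settings this project requires; see the Composer documentation for the full list:

```lua
-- A sketch: animate the transition to the start scene with a fade.
-- The "fade" effect and 400 ms duration are example values.
composer.gotoScene( "start", { effect = "fade", time = 400 } )
```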

11. Start Scene

Create a new Lua file named start.lua in the project's main directory. This will be a composer file, which means we need to require the Composer module and create a composer scene. Add the following snippet to start.lua.

local composer = require( "composer" )
local scene = composer.newScene()
return scene

The call to newScene makes start.lua part of composer's scene hierarchy. This means that it becomes a screen within the game, which we can call composer methods on.

From here on out, the code added to start.lua should be placed above the return statement.

12. Local Variables

The following are the local variables we will need for the start scene.

local startButton -- used to start the game
local pulsatingText = require("pulsatingtext") -- A module providing a pulsating text effect
local starFieldGenerator = require("starfieldgenerator") -- A module that generates the star field
local starGenerator -- An instance of the starFieldGenerator

It's important to understand that local variables in the main chunk only get initialized once, when the scene is loaded for the first time. When navigating through the composer scenes, for example, by invoking methods like gotoScene, the local variables will already be initialized.

This is important to remember if you want the local variables to be reinitialized when navigating back to a particular scene. The easiest way to do this is to remove the scene from the composer hierarchy by calling the removeScene method. The next time you navigate to that scene, it will be automatically reloaded. That's the approach we'll be taking in this tutorial.

The pulsatingText and starFieldGenerator variables reference two custom modules we will create to add class-like functionality to the project. Create two new files in your project folder named pulsatingtext.lua and starfieldgenerator.lua.

13. Composer Scene Events

If you've taken the time to read the documentation on Composer, which I linked to earlier, you will have noticed that it includes a template containing every possible composer event. The comments are very useful as they indicate which events to leverage for initializing assets, timers, etc. We are interested in the scene:create, scene:show, and scene:hide methods for this tutorial.

Step 1: scene:create

Add the following code snippet to start.lua.

function scene:create( event )
    local group = self.view
    startButton = display.newImage( "new_game_btn.png", display.contentCenterX, display.contentCenterY + 100 )
    group:insert( startButton )
end

This method is called when the scene's view doesn't exist yet. This is where you should initialize the display objects and add them to the scene. The group variable points to self.view, which is a GroupObject for the entire scene.

We create the startButton by using the Display object's newImage method, which takes as its parameters the path to the image and the x and y values for the image's position on screen.

Step 2: scene:show

Composer's scene:show method has two phases. The will phase is called when the scene is still off-screen, but is about to come on-screen. The did phase is called when the scene is on-screen. This is where you want to add code to make the scene come alive, start timers, add event listeners, play audio, etc.

In this tutorial we are only interested in the did phase. Add the following code snippet to start.lua.

function scene:show( event )
    local phase = event.phase
    local previousScene = composer.getSceneName( "previous" )
    if ( previousScene ~= nil ) then
        composer.removeScene( previousScene )
    end
    if ( phase == "did" ) then
        startButton:addEventListener( "tap", startGame )
    end
end

We declare a local variable phase, which we use to check which phase the show method is in. Since we will be coming back to this scene later in the game, we check to see if there is a previous scene and, if so, remove it. We add a tap listener to the startButton that calls the startGame function.

Step 3: scene:hide

Composer's scene:hide method also has two phases. The will phase is called when the scene is on-screen, but is about to go off-screen. Here you will want to stop any timers, remove event listeners, stop audio, etc. The did phase is called once the scene has gone off-screen.

In this tutorial, we are only interested in the will phase in which we remove the tap listener from the startButton.

function scene:hide( event )
    local phase = event.phase
    if ( phase == "will" ) then
        startButton:removeEventListener( "tap", startGame )
    end
end

14. Start Game

The startGame function is called when the user taps the startButton. In this function, we invoke the gotoScene composer method, which will take us to the gamelevel scene.

function startGame()
    composer.gotoScene("gamelevel")
end

15. Game Level Scene

Create a new file named gamelevel.lua and add the following code to it. This should look familiar. We are creating a new scene and returning it.

local composer = require("composer")
local scene = composer.newScene()

return scene

16. Add Scene Listeners

We need to add scene listeners for the create, show, and hide methods. Add the following code to start.lua.

scene:addEventListener( "create", scene )
scene:addEventListener( "show", scene )
scene:addEventListener( "hide", scene )

17. Test Progress

If you test the game now, you should see a black screen with a button you can tap. Tapping the button should take you to the gamelevel scene, which is now just a blank screen.

Conclusion

This brings this part of the series to a close. In the next part, we will start implementing the game's gameplay. Thanks for reading and see you in the second part of this series.

2014-12-31T15:20:08.000Z by James Tyner

Swift from Scratch: Optionals and Control Flow


In the previous articles, you learned some of the basic concepts of the Swift programming language. If you've programmed before, I'm sure you saw a few similarities with other programming languages, such as Ruby, JavaScript, and Objective-C.

In this article, we zoom in on control flow in Swift. Before we can discuss control flow in more detail, we need to take a look at a concept that is new to most of you, optionals. Optionals are another safety feature of Swift. At first, it may look like a hassle to use optionals, but you'll quickly learn that optionals will make your code much safer.

1. Optionals

We've already seen that a variable must be initialized before it can be used. Take a look at the following example to better understand what this means.

var str: String

str.isEmpty

If you're used to working with strings in Objective-C, then you may be surprised that Swift shows you an error. Let's see what that error tells us.

In many languages, variables have an initial default value. In Objective-C, for example, the string in the following code snippet is equal to nil.

NSString *newString;

However, the concept of nil differs in Swift and Objective-C. We'll discuss nil in more detail a bit later.

What is an optional?

Swift uses optionals to encapsulate an important concept: a variable or constant either has a value or it doesn't. It's that simple in Swift. To declare a variable or constant as optional, we append a question mark to the type of the variable or constant.

var str: String?

The variable str is no longer of type String. It is now of type optional String, written String?. This is important to understand. The result or side effect is that we can no longer directly interact with the value of the str variable. The value is safely stored in the optional and we need to ask the optional for the value it encapsulates.

Forced Unwrapping

One way to access the value of an optional is through forced unwrapping. We can access the value of the variable str by appending an ! to the variable's name.

var str: String?

str = "Test"

println(str!)

It's important that you are sure that the optional contains a value when you force unwrap it. If the optional doesn't have a value and you force unwrap it, Swift will throw an error at you.
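When a sensible fallback value exists, a safer alternative is Swift's nil coalescing operator, ??. The snippet below is a minimal sketch of how it behaves:

```swift
var str: String?

// The nil coalescing operator returns the unwrapped value if the
// optional contains one, and the fallback on the right otherwise.
let value = str ?? "default"

println(value) // prints "default", because str is nil
```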

Optional Binding

There is a safer way to access the value of an optional. We'll take a closer look at if statements in a few minutes, but the following example shows how we can safely access the value stored in the variable str, which is of type optional String.

var str: String?

if str != nil {
    println(str!)
} else {
    println("str has no value")
}

We first check if the variable str is equal to nil before we print its contents. In this example, str doesn't have a value, which means it won't be force unwrapped by accident.

There's a more elegant approach called optional binding. In the following example, we assign the value stored in the optional to a temporary constant, which is used in the if statement. The value of the optional str is bound to the constant strConst and used in the if statement. This approach also works for while statements.

var str: String?

str = "Test"

if let strConst = str {
    println(strConst)
} else {
    println("str has no value")
}

What is nil?

If you're coming from Objective-C, then you most certainly know what nil is. In Objective-C, nil is a pointer to an object that doesn't exist. Swift defines nil a bit differently and it's important that you understand the difference.

In Swift, nil means the absence of a value, any value. While nil is only applicable to objects in Objective-C, in Swift nil can be used for any type. It's therefore important to understand that an optional isn't the equivalent of nil in Objective-C. These concepts are very different.
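A short illustration of this difference: nil works just as well with a value type like Int as it does with a reference type.

```swift
// In Swift, nil can be used with any optional type, not just objects.
var age: Int?   // age is nil, it has no value
age = 42        // age now contains the value 42
age = nil       // and here the value is removed again
```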

2. Control Flow

Swift offers a number of common constructs to control the flow of the code you write. If you have any experience programming, then you'll have no problems getting up to speed with Swift's control flow constructs, conditional if and switch statements, and for and while loops.

However, Swift wouldn't be Swift if its control flow didn't slightly differ from, for example, Objective-C's control flow constructs. While the details are important, I'm sure they won't hinder you from getting up to speed with Swift. Let's start with the most common conditional construct, the if statement.

if

Swift's if statements are very similar to those found in Objective-C. The main difference is that there's no need to wrap the condition in parentheses. Curly braces, however, are mandatory. The latter prevents developers from introducing common bugs that are related to writing if statements without curly braces. This is what an if statement looks like in Swift.

let a = 10

if a > 10 {
    println("The value of \"a\" is greater than 10.")
} else {
    println("The value of \"a\" is less than or equal to 10.")
}

It should come as no surprise that Swift also defines an else clause. The code in the else clause is executed if the condition evaluates to false. It's also possible to chain if statements as shown in the next example.

let a = 10

if a > 10 {
    println("The value of \"a\" is greater than 10.")
} else if a > 5 {
    println("The value of \"a\" is greater than 5.")
} else {
    println("The value of \"a\" is less than or equal to 5.")
}

There is one important note to make, that is, the condition of an if statement needs to return true or false. This isn't true for if statements in Objective-C. Take a look at the following if statement in Objective-C.

NSArray *array = @[];

if (array.count) {
    NSLog(@"The array contains one or more items.");
} else {
    NSLog(@"The array is empty.");
}

If we were to port the above code snippet to Swift, we would run into an error. The error isn't very informative, but Swift does tell us that we need to ensure the result of the condition evaluates to true or false.

The correct way to translate the above Objective-C snippet to Swift is by making sure the condition of the if statement evaluates to true or false, as in the following snippet.

let array = [String]()

if array.count > 0 {
    println("The array contains one or more items.")
} else {
    println("The array is empty.")
}

switch

Swift's switch statement is more powerful than its Objective-C equivalent. It's also safer as you'll learn in a moment. While there are some differences, switch statements in Swift adhere to the same concept as those in other programming languages, a value is passed to the switch statement and it is compared against possible matching patterns.

That's right, patterns. Like I said, a switch statement in Swift has a few tricks up its sleeve. We'll take a look at those tricks in a moment. Let's talk about safety first.

Exhaustive

A switch statement in Swift needs to be exhaustive, meaning that every possible value of the type that's handed to the switch statement needs to be handled by the switch statement. As in Objective-C, this is easily solved by adding a default case like in the following example.

let a = 10

switch a {
    case 0:
        println("a is equal to 0")
    case 1:
        println("a is equal to 1")
    default:
        println("a has another value")
}

Fallthrough

An important difference with Objective-C's implementation of switch statements is the lack of implicit fallthrough. The following example doesn't work in Swift for a few reasons.

let a = 10

switch a {
    case 0:
    case 1:
        println("a is equal to 1")
    default:
        println("a has another value")
}

The first case in which a is compared against 0 doesn't implicitly fall through to the second case in which a is compared against 1. If you add the above example to your playground, you'll notice that Swift throws an error at you. The error says that every case needs to include at least one executable statement.

Notice that the cases of the switch statement don't include break statements to break out of the switch statement. This isn't required in Swift since implicit fallthrough doesn't exist in Swift. This will eliminate a range of common bugs caused by unintentional fallthrough.

Patterns

The power of a switch statement in Swift lies in pattern matching. Take a look at the following example in which I've used ranges to compare the considered value against.

let a = 10

switch a {
    case 0..<5:
        println("The value of a lies between 0 and 4.")
    case 5...10:
        println("The value of a lies between 5 and 10.")
    default:
        println("The value of a is greater than 10.")
}

The ..< operator or half-open range operator defines a range from the first value to the second value, excluding the second value. The ... operator or closed range operator defines a range from the first value to the second value, including the second value. These operators are very useful in a wide range of situations.

You can also compare the considered value of a switch statement to tuples. Take a look at the following example to see how this works.

let latlng = (34.15, -78.03)

switch latlng {
case (0, 0):
    println("We're at the center of the planet.")
case (0...90, _):
    println("We're in the Northern hemisphere.")
case (-90...0, _):
    println("We're in the Southern hemisphere.")
default:
    println("The coordinate is invalid.")
}

As you can see in the above example, it is possible that the value matches more than one case. When this happens, the first matching case is chosen. The above example also illustrates the use of the underscore. As we saw in the previous article, we can use an underscore, _, to tell Swift which values we're not interested in.

Value Binding

Value binding is also possible with switch statements as the following example demonstrates. The second value of the tuple is temporarily bound to the constant description for use in the first and second case.

var response = (200, "OK")

switch response {
case (200..<400, let description):
    println("The request was successful with description \(description).")
case (400..<500, let description):
    println("The request was unsuccessful with description \(description).")
default:
    println("The request was unsuccessful with no description.")
}
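Value binding can also be combined with a where clause to attach an extra condition to a case. The snippet below is a brief sketch, assuming the same tuple shape as above:

```swift
var response = (404, "Not Found")

switch response {
case let (code, description) where code >= 400:
    println("Error \(code): \(description)")
default:
    println("Success")
}
// prints "Error 404: Not Found"
```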

for

The for loop is the first loop construct we'll take a look at. It behaves very similarly to for loops in other languages. There are two flavors, the for loop and the for-in loop.

for

Swift's for loop is almost identical to a for loop in Objective-C as the following example illustrates. The for loop executes a number of statements until a predefined condition is met.

for var i = 0; i < 10; i++ {
    println("i is equal to \(i).")
}

As with if statements, there's no need to use parentheses to enclose the loop's initialization, condition, and increment definitions. The loop's statements, however, do need to be enclosed by curly braces.

for-in

The for-in loop is ideal for looping over the contents of a range or collection. In the following example, we loop over the elements of an array.

let numbers = [1, 2, 3, 5, 8]

for number in numbers {
    println("number is equal to \(number)")
}

We can also use for-in loops to loop over the key-value pairs of a dictionary. In the following example, we declare a dictionary and print its contents to the console. As we saw earlier in this series, the sequence of the key-value pairs is undefined since a dictionary is an unordered set of key-value pairs.

var bids = ["Tom": 100, "Bart": 150, "Susan": 120]

for (name, bid) in bids {
    println("\(name)'s bid is $\(bid).")
}

Each key-value pair of the dictionary is available in the for-in loop as a tuple of named constants. The for-in loop also works great in combination with ranges. I'm sure you agree that the below snippet is easy to read and understand thanks to the use of a closed range.

for i in 1...10 {
    println("i is equal to \(i)")
}

while

The while loop also comes in two variations, while and do-while. The main difference is that the set of statements of a do-while loop is always executed at least once, because the condition of the do-while is evaluated at the end of each iteration. The following example illustrates this difference.

var c = 5
var d = 5

while c < d {
    println("c is smaller than d")
}

do {
    println("c is smaller than d")
} while c < d

The println statement of the while loop is never executed, while that of the do-while loop is executed once.

In many cases, for loops can be rewritten as while loops and it's often up to the developer to determine which type of loop to use in a particular situation. The following for and while loops result in the same output.

for var i = 0; i < 10; i++ {
    println(i)
}

var i = 0

while i < 10 {
    println(i)
    i++
}

Conclusion

There's much more to control flow in Swift than what we've covered in this article, but you now have a basic understanding to continue your journey into Swift. I hope this tutorial has shown you that Swift's control flow implementation is very similar to that of other programming languages, with a twist.

In the rest of this series, we'll make more use of Swift's control flow constructs and you'll gradually get a better understanding of the subtle differences with languages like Objective-C. In the next installment of this series, we start exploring functions.

2015-01-02T18:45:04.000Z by Bart Jacobs

Swift from Scratch: Optionals and Control Flow

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-22874

In the previous articles, you learned some of the basic concepts of the Swift programming language. If you've programmed before, I'm sure you saw a few similarities with other programming languages, such as Ruby, JavaScript, and Objective-C.

In this article, we zoom in on control flow in Swift. Before we can discuss control flow in more detail, we need to take a look at a concept that is new to most of you, optionals. Optionals are another safety feature of Swift. At first, it may look like a hassle to use optionals, but you'll quickly learn that optionals will make your code much safer.

1. Optionals

We've already seen that a variable must be initialized before it can be used. Take a look at the following example to better understand what this means.

var str: String

str.isEmpty

If you're used to working with strings in Objective-C, then you may be surprised that Swift shows you an error. Let's see what that error tells us.

In many languages, variables have an initial default value. In Objective-C, for example, the string in the following code snippet is equal to nil.

NSString *newString;

However, the concept of nil differs in Swift and Objective-C. We'll discuss nil in more detail a bit later.

What is an optional?

Swift uses optionals to encapsulate an important concept, that is, a variable or constant has a value or it hasn't. It's that simple in Swift. To declare a variable or constant as optional, we append a question mark to the type of the variable or constant.

var str: String?

The variable str is no longer of type String. It is now of type optionalString. This is important to understand. The result or side effect is that we can no longer directly interact with the value of the str variable. The value is safely stored in the optional and we need to ask the optional for the value it encapsulates.

Forced Unwrapping

One way to access the value of an optional is through forced unwrapping. We can access the value of the variable str by appending an ! to the variable's name.

var str: String?

str = "Test"

println(str!)

It's important that you are sure that the optional contains a value when you force unwrap it. If the optional doesn't have a value and you force unwrap it, Swift will throw an error at you.

Optional Binding

There is a safer way to access the value of an optional. We'll take a closer look at if statements in a few minutes, but the following example shows how we can safely access the value stored in the variable str, which is of type optional String.

var str: String?

if str != nil {
    println(str!)
} else {
    println("str has no value")
}

We first check if the variable str is equal to nil before we print its contents. In this example, str doesn't have a value, which means it won't be forced unwrapped by accident.

There's a more elegant approach called optional binding. In the following example, we assign the value stored in the optional to a temporary constant, which is used in the if statement. The value of the optional str is bound to the constant strConst and used in the if statement. This approach also works for while statements.

var str: String?

str = "Test"

if let strConst = str {
    println(strConst)
} else {
    println("str has no value")
}

What is nil?

If you're coming from Objective-C, then you most certainly know what nil is. In Objective-C, nil is a pointer to an object that doesn't exist. Swift defines nil a bit differently and it's important that you understand the difference.

In Swift, nil means the absence of a value, any value. While nil is only applicable to objects in Objective-C, in Swift nil can be used for any type. It's therefore important to understand that an optional isn't the equivalent of nil in Objective-C. These concepts are very different.

2. Control Flow

Swift offers a number of common constructs to control the flow of the code you write. If you have any experience programming, then you'll have no problems getting up to speed with Swift's control flow constructs, conditional if and switch statements, and for and while loops.

However, Swift wouldn't be Swift if its control flow didn't slightly differ from, for example, Objective-C's control flow constructs. While the details are important, I'm sure they won't hinder you from getting up to speed with Swift. Let's start with the most common conditional construct, the if statement.

if

Swift's if statements are very similar to those found in Objective-C. The main difference is that there's no need to wrap the condition in parentheses. Curly braces, however, are mandatory. The latter prevents developers from introducing common bugs that are related to writing if statements without curly braces. This is what an if statement looks like in Swift.

let a = 10

if a > 10 {
    println("The value of \"a\" is greater than 10.")
} else {
    println("The value of \"a\" is less than or equal to 10.")
}

It should come to no surprise that Swift also defines an else clause. The code in the else clause is executed if the condition is equal to false. It's also possible to chain if statements as shown in the next example.

let a = 10

if a > 10 {
    println("The value of \"a\" is greater than 10.")
} else if a > 5 {
    println("The value of \"a\" is greater than 5.")
} else {
    println("The value of \"a\" is less than or equal to 5.")
}

There is one important note to make, that is, the condition of an if statement needs to return true or false. This isn't true for if statements in Objective-C. Take a look at the following if statement in Objective-C.

NSArray *array = @[];

if (array.count) {
    NSLog(@"The array contains one or more items.");
} else {
    NSLog(@"The array is empty.");
}

If we were to port the above code snippet to Swift, we would run into an error. The error isn't very informative, but Swift does tell us that we need to ensure the result of the condition evaluates to true or false.

The correct way to translate the above Objective-C snippet to Swift is by making sure the condition of the if statement evaluates to true or false, as in the following snippet.

let array = [String]()

if array.count > 0 {
    println("The array contains one or more items.")
} else {
    println("The array is empty.")
}

switch

Swift's switch statement is more powerful than its Objective-C equivalent. It's also safer as you'll learn in a moment. While there are some differences, switch statements in Swift adhere to the same concept as those in other programming languages, a value is passed to the switch statement and it is compared against possible matching patterns.

That's right, patterns. Like I said, a switch statement in Swift has a few tricks up its sleeve. We'll take a look at those tricks in a moment. Let's talk about safety first.

Exhaustive

A switch statement in Swift needs to be exhaustive, meaning that every possible value of the type that's handed to the switch statement needs to be handled by the switch statement. As in Objective-C, this is easily solved by adding a default case, as in the following example.

let a = 10

switch a {
    case 0:
        println("a is equal to 0")
    case 1:
        println("a is equal to 1")
    default:
        println("a has another value")
}

Fallthrough

An important difference from Objective-C's implementation of switch statements is the lack of implicit fallthrough. The following example doesn't compile in Swift.

let a = 10

switch a {
    case 0:
    case 1:
        println("a is equal to 1")
    default:
        println("a has another value")
}

The first case in which a is compared against 0 doesn't implicitly fall through to the second case in which a is compared against 1. If you add the above example to your playground, you'll notice that Swift throws an error at you. The error says that every case needs to include at least one executable statement.

Notice that the cases of the switch statement don't include break statements to break out of the switch statement. Breaking out isn't required in Swift, since cases don't implicitly fall through to the next case. This eliminates a range of common bugs caused by unintentional fallthrough.
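If you do want C-style behavior for a particular case, Swift lets you opt in explicitly with the fallthrough keyword. The following sketch records which cases execute so the behavior is visible (the value and labels are just illustrative):

```swift
let value = 0
var visited = [String]()

switch value {
case 0:
    visited.append("case 0")
    fallthrough // execution continues into the next case unconditionally
case 1:
    visited.append("case 1")
default:
    visited.append("default")
}
// visited now contains both "case 0" and "case 1"
```

Note that fallthrough doesn't re-check the next case's pattern; execution simply continues into its statements.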

Patterns

The power of a switch statement in Swift lies in pattern matching. Take a look at the following example, in which ranges are used as patterns to match the value against.

let a = 10

switch a {
    case 0..<5:
        println("The value of a lies between 0 and 4.")
    case 5...10:
        println("The value of a lies between 5 and 10.")
    default:
        println("The value of a is greater than 10.")
}

The ..< operator or half-open range operator defines a range from the first value to the second value, excluding the second value. The ... operator or closed range operator defines a range from the first value to the second value, including the second value. These operators are very useful in a wide range of situations.
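One way to make the difference between the two operators concrete is to turn each range into an array and inspect its contents (a minimal sketch):

```swift
// The half-open range excludes its upper bound.
let halfOpen = Array(0..<5) // [0, 1, 2, 3, 4]
// The closed range includes its upper bound.
let closed = Array(0...5)   // [0, 1, 2, 3, 4, 5]
```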

You can also compare the considered value of a switch statement to tuples. Take a look at the following example to see how this works.

let latlng = (34.15, -78.03)

switch latlng {
case (0, 0):
    println("We're at the center of the planet.")
case (0...90, _):
    println("We're in the Northern hemisphere.")
case (-90...0, _):
    println("We're in the Southern hemisphere.")
default:
    println("The coordinate is invalid.")
}

As you can see in the above example, it is possible that the value matches more than one case. When this happens, the first matching case is chosen. The above example also illustrates the use of the underscore. As we saw in the previous article, we can use an underscore, _, to tell Swift which values we're not interested in.

Value Binding

Value binding is also possible with switch statements, as the following example demonstrates. The second value of the tuple is temporarily bound to the constant description for use in the first and second cases.

var response = (200, "OK")

switch response {
case (200..<400, let description):
    println("The request was successful with description \(description).")
case (400..<500, let description):
    println("The request was unsuccessful with description \(description).")
default:
    println("The request was unsuccessful with no description.")
}

for

The for loop is the first loop construct we'll take a look at. It behaves very similarly to for loops in other languages. There are two flavors: the for loop and the for-in loop.

for

Swift's for loop is almost identical to a for loop in Objective-C, as the following example illustrates. The for loop repeatedly executes a set of statements as long as its condition evaluates to true.

for var i = 0; i < 10; i++ {
    println("i is equal to \(i).")
}

As with if statements, there's no need to use parentheses to enclose the loop's initialization, condition, and increment definitions. The loop's statements, however, do need to be enclosed by curly braces.

for-in

The for-in loop is ideal for looping over the contents of a range or collection. In the following example, we loop over the elements of an array.

let numbers = [1, 2, 3, 5, 8]

for number in numbers {
    println("number is equal to \(number)")
}

We can also use for-in loops to loop over the key-value pairs of a dictionary. In the following example, we declare a dictionary and print its contents to the console. As we saw earlier in this series, the order of the key-value pairs is undefined, since a dictionary is an unordered collection.

var bids = ["Tom": 100, "Bart": 150, "Susan": 120]

for (name, bid) in bids {
    println("\(name)'s bid is $\(bid).")
}

Each key-value pair of the dictionary is available in the for-in loop as a tuple of named constants. The for-in loop is also great in combination with ranges. I'm sure you'll agree that the below snippet is easy to read and understand thanks to the use of a closed range.

for i in 1...10 {
    println("i is equal to \(i)")
}

while

The while loop also comes in two variations: while and do-while. The main difference is that the set of statements of a do-while loop is always executed at least once, because the condition of the do-while loop is evaluated at the end of each iteration. The following example illustrates this difference.

var c = 5
var d = 5

while c < d {
    println("c is smaller than d")
}

do {
    println("c is smaller than d")
} while c < d

The println statement of the while loop is never executed, while that of the do-while loop is executed once.

In many cases, for loops can be rewritten as while loops and it's often up to the developer to determine which type of loop to use in a particular situation. The following for and while loops result in the same output.

for var i = 0; i < 10; i++ {
    println(i)
}

var i = 0

while i < 10 {
    println(i)
    i++
}

Conclusion

There's much more to control flow in Swift than what we've covered in this article, but you now have a basic understanding to continue your journey into Swift. I hope this tutorial has shown you that Swift's control flow implementation is very similar to that of other programming languages, with a twist.

In the rest of this series, we'll make more use of Swift's control flow constructs and you'll gradually get a better understanding of the subtle differences with languages like Objective-C. In the next installment of this series, we start exploring functions.

Bart Jacobs, 2015-01-02

Quick Tip: Leveraging the Power of Git Stash


Imagine that you're working on a feature in a Git-controlled software project. You're right in the middle of making some changes when you get a request to fix a critical bug. To start resolving the issue, you need a new branch and a clean working directory. When it comes to basic Git commands, you have two options:

  • Run git reset --hard to remove your uncommitted changes.
  • Record your incomplete work as a new commit.

The former option loses all of your work while the latter results in a partial commit that isn’t meaningful. Neither of these scenarios is all that desirable.

This is where the git stash command comes into play. Like git reset --hard, it gives you a clean working directory, but it also records your incomplete changes internally. After fixing the critical bug, you can re-apply these changes and pick up where you left off. You can think of git stash as a "pause button" for your in-progress work.

Prerequisites

This tutorial assumes that you have installed Git and that you're familiar with its basic workflow. You should be comfortable staging changes, creating commits, and working with branches. You'll also need a Git repository to experiment on.

1. Stashing Changes

Before you can run git stash, you need to have some uncommitted changes in your Git repository. For example, if you edited a file called foo.py, your git status output would look like this:

On branch master
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

    modified:   foo.py

To stash these changes, simply execute git stash without any arguments.

git stash

This will take both your staged and unstaged changes, record them internally, then clear the working directory. This gives you the opportunity to switch to a new branch and develop other features without worrying about your incomplete changes messing anything up.

2. Re-Applying Stashed Changes

When you're ready to come back to your incomplete work, run the following command to re-apply the stashed changes:

git stash pop

The most recently stashed changeset will re-appear in your working directory and you can continue exactly where you left off. That's all there is to it.
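The stash-and-pop cycle can be sketched end to end in a throwaway repository (the file name, contents, and commit messages below are just illustrative):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

# A committed baseline to stash against.
echo "print('hello')" > foo.py
git add foo.py
git commit -qm "Add foo.py"

# Make an uncommitted change, then stash it away.
echo "print('work in progress')" >> foo.py
git stash
git diff --quiet && echo "working directory is clean"

# Re-apply the stashed change.
git stash pop
git diff --quiet || echo "incomplete work is back"
```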

3. Resolving Conflicts

Much like the git merge command, git stash pop can result in conflicts if the same sections of source code have changed since you executed git stash. When this happens, you'll see the following message after running git stash pop:

Auto-merging foo.py
CONFLICT (content): Merge conflict in foo.py

You'll also find the affected file listed under the Unmerged paths section in the git status output, as well as the affected lines in the source file.

<<<<<<< Updated upstream
print("Recently committed changes");
=======
print("Incomplete work");
>>>>>>> Stashed changes

You'll need to manually resolve the conflict in the source file, but you usually don't want to commit it immediately like you would after a git merge conflict. Most of the time, you'll continue working on your unfinished feature until you have prepared a meaningful commit. Then, you can simply add it to the index and commit it as usual. In other words, you can treat git stash pop conflicts just like any other uncommitted changes.
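The conflict scenario above can be reproduced in a throwaway repository (file contents and messages are illustrative): stash an edit to a line, commit a different edit to the same line, then pop.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo 'print("original");' > foo.py
git add foo.py
git commit -qm "Add foo.py"

# Stash an edit to the first line...
echo 'print("Incomplete work");' > foo.py
git stash -q

# ...then commit a different edit to the same line.
echo 'print("Recently committed changes");' > foo.py
git commit -qam "Recently committed changes"

# Popping now produces a merge conflict; git stash pop exits non-zero
# and leaves conflict markers in the file for you to resolve.
git stash pop || true
grep "<<<<<<<" foo.py
```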

4. The Stash Stack

For most scenarios, the above commands are all you need when it comes to a "pause button". But, understanding how stashed changes are represented opens the door for more advanced usage.

So far, we've only been talking about stashing a single changeset. However, each time you run git stash, uncommitted changes are stored on a stack. This means that you can stash multiple changesets at the same time.

This is useful in the early stages of development when you're not sure which direction you want to take. Instead of losing your changes with git reset --hard, you can keep your work-in-progress snapshots on the stash stack in case you want to re-apply one of them later.

You can inspect the stash stack with the list parameter.

git stash list

If you had previously executed git stash three times, this would output something like the following:

stash@{0}: WIP on new-feature: 5cedccc Try something crazy
stash@{1}: WIP on new-feature: 9f44b34 Take a different direction
stash@{2}: WIP on new-feature: 5acd291 Begin new feature

The git stash pop command always re-applies the most recent snapshot, the one at the top of the stash stack. But, it's also possible to pick and choose which stashed snapshot you want to re-apply with the apply command. For example, if you wanted to re-apply the second set of changes, you would use the following command:

git stash apply stash@{1}

Just like git stash pop, the changes will re-appear in your working directory and you can continue working on the incomplete feature. Note that this will not automatically remove the snapshot from the stash stack. Instead, you'll need to manually delete it with the drop command.

git stash drop stash@{1}

Again, working with the stash stack is more of an edge case for most Git users. The git stash and git stash pop commands should suffice for most of your needs, although git stash list can also prove useful if you forgot where your last stash operation took place.
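As a sketch of the stack in action (again in a throwaway repository, with illustrative names), here are three stashes, one apply, and one drop:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "v1" > feature.txt
git add feature.txt
git commit -qm "Begin new feature"

# Push three work-in-progress snapshots onto the stash stack.
for i in 1 2 3; do
    echo "attempt $i" >> feature.txt
    git stash -q
done

git stash list                  # stash@{0} is the most recent snapshot

# Re-apply the second snapshot without removing it...
git stash apply -q "stash@{1}"
# ...then delete it from the stack explicitly.
git stash drop -q "stash@{1}"
git stash list                  # two entries remain
```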

Conclusion

Committing meaningful snapshots is at the heart of any Git workflow. Purposeful, encapsulated commits make it much easier to navigate your project history, figure out where bugs were introduced, and revert changes.

While not exactly an everyday command, git stash can be a very convenient tool for creating meaningful commits. It allows you to store incomplete work while avoiding the need to commit partial snapshots to your permanent project history. Keep this in mind the next time you wish you could pause whatever you were working on and come back to it later.

Ryan Hodson, 2015-01-05