Working with CGPoint, CGSize, and CGRect structures isn't difficult if you're used to a language that supports the dot syntax. However, programmatically positioning views or writing drawing code is verbose and can become difficult to read.
In this tutorial, I'd like to clear up a few misconceptions about frames and bounds, and introduce you to CGGeometry, a collection of structures, constants, and functions that make working with CGPoint, CGSize, and CGRect that much easier.
1. Data Types
If you're new to iOS or OS X development, you may be wondering what the CGPoint, CGSize, and CGRect structures are. The CGGeometry Reference defines a number of geometric primitives, or structures; the ones we are interested in are CGPoint, CGSize, and CGRect.
As most of you probably know, CGPoint is a C structure that defines a point in a coordinate system. The origin of this coordinate system is at the top left on iOS and at the bottom left on OS X. In other words, the orientation of its vertical axis differs on iOS and OS X.
CGSize is another simple C structure that defines a width and a height value, and CGRect has an origin field, a CGPoint, and a size field, a CGSize. Together the origin and size fields define the position and size of a rectangle.
The CGGeometry Reference also defines other types, such as CGFloat and CGVector. CGFloat is nothing more than a typedef for float or double, depending on the architecture the application runs on, 32-bit or 64-bit.
2. Frames and Bounds
The first thing I want to clarify is the difference between a view's frame and its bounds, because this is something that trips up a lot of beginning iOS developers. The difference isn't difficult to grasp, though.
On iOS and OS X, an application has multiple coordinate systems. On iOS, for example, the application's window is positioned in the screen's coordinate system and every subview of the window is positioned in the window's coordinate system. In other words, the subviews of a view are always positioned in the view's coordinate system.
Frame
As the documentation clarifies, the frame of a view is a structure, a CGRect, that defines the size of the view and its position in the superview's coordinate system. Take a look at the following diagram for clarification.
Bounds
The bounds property of a view defines the size of the view and its position in the view's own coordinate system. This means that in most cases the origin of the bounds of a view is set to {0, 0}, as shown in the following diagram. The view's bounds is important for drawing the view.
When the frame property of a view is modified, the view's center and/or bounds are also modified.
3. CGGeometry Reference
Convenient Getters
As I mentioned earlier, the CGGeometry Reference is a collection of structures, constants, and functions that make it easier to work with coordinates and rectangles. You may have run into code snippets similar to this:
CGPoint point = CGPointMake(self.view.frame.origin.x + self.view.frame.size.width, self.view.frame.origin.y + self.view.frame.size.height);
Not only is this snippet hard to read, it's also quite verbose. We can rewrite this code snippet using two convenient functions defined in the CGGeometry Reference.
CGRect frame = self.view.frame;
CGPoint point = CGPointMake(CGRectGetMaxX(frame), CGRectGetMaxY(frame));
To simplify the above code snippet, we store the view's frame in a variable named frame and use CGRectGetMaxX and CGRectGetMaxY. The names of the functions are self-explanatory.
The CGGeometry Reference defines functions to return the smallest and largest values for the x- and y-coordinates of a rectangle as well as the x- and y-coordinates that lie at the rectangle's center. Two other convenient getter functions are CGRectGetWidth and CGRectGetHeight.
Creating Structures
When it comes to creating CGPoint, CGSize, and CGRect structures, most of us use CGPointMake and its cousins. These functions are also defined in the CGGeometry Reference. Even though their implementation is surprisingly simple, they are incredibly useful and let you write less code. For example, this is how CGRectMake is actually implemented:
The functions we've covered so far are pretty well known among iOS developers and help us write less code that is more readable. However, the CGGeometry Reference also defines a number of other functions that are less known. For example, the CGGeometry Reference defines half a dozen functions for modifying CGRect structures. Let's take a look at a few of these functions.
CGRectUnion
CGRectUnion accepts two CGRect structures and returns the smallest possible rectangle that contains both rectangles. This may sound trivial and I'm sure you can easily accomplish the same task with a few lines of code, but CGGeometry is all about providing you with a few dozen functions that make your code cleaner and more readable.
If you add the following code snippet to a view controller's viewDidLoad method, you should get the following result in the iOS Simulator. The gray rectangle is the result of using CGRectUnion.
Another useful function is CGRectDivide, which lets you divide a given rectangle into two rectangles. Take a look at the following code snippet and screenshot to see how it's used.
If you were to calculate the red and orange rectangles without using CGRectDivide, you'd end up with a few dozen lines of code. Give it a try if you don't believe me.
Comparison and Containment
Comparing geometric structures and checking for containment is very easy with the following six functions:
CGPointEqualToPoint
CGSizeEqualToSize
CGRectEqualToRect
CGRectIntersectsRect
CGRectContainsPoint
CGRectContainsRect
The CGGeometry Reference has a few other gems, such as CGPointCreateDictionaryRepresentation for converting a CGPoint structure to a CFDictionaryRef, and CGRectIsEmpty to check whether a rectangle has zero width or height. Read the documentation of the CGGeometry Reference to find out more.
4. Bonus: Logging
Logging structures to Xcode's console is cumbersome without a few helper functions. Luckily, the UIKit framework defines a handful of functions that make this very easy to do. I use them all the time. Take a look at the following code snippet to see how they work. It isn't rocket science.
There are also convenience functions for logging affine transforms (NSStringFromCGAffineTransform), edge insets structs (NSStringFromUIEdgeInsets), and offset structs (NSStringFromUIOffset).
Conclusion
The iOS SDK contains a lot of gems many developers don't know about. I hope I've convinced you of the usefulness of the CGGeometry Reference. Once you start using its collection of functions, you'll start wondering how you used to manage without it.
In the first article of this series, we learned about the Core Data stack, the heart of a Core Data application. We explored the managed object context, the persistent store coordinator, and the managed object model.
This article focuses on the data model of a Core Data application. We zoom in on Xcode's data model editor and we take a look at entities, attributes, and relationships.
1. Data Model Editor
Start by downloading the project from the previous tutorial or clone the repository from GitHub. Open the project in Xcode and, in the Project Navigator, search for Core_Data.xcdatamodeld. Xcode automatically shows the data model editor when the project's data model is selected.
2. Entities
Before we explore the editor's user interface, we need to create an entity to work with. At the bottom of the data model editor, click the Add Entity button. This adds a new entity named Entity, which shows up in the Entities section on the left of the data model editor. Change the entity's name to Person by double-clicking it in the Entities section.
"What is an entity?" you may be wondering. To bring back the database analogy, an entity is comparable to a table in a database. When you select the Person entity, you see that an entity can have attributes, relationships, and fetched properties. Don't worry about fetched properties for now; they're a more advanced feature of the framework.
3. Attributes
Give the Person entity an attribute by clicking the plus button at the bottom of the Attributes table. Double-click the attribute's name and set it to first. From the Type drop-down menu, select String. If we compare this to a table in a database, the Person table now has a column first of type String.
Even though I don't want to confuse you by comparing entities with tables of a database, it makes it easier to understand what entities and attributes are. In fact, if you use a SQLite database as the backing store of your application, Core Data creates a table for you to store the data of the Person entity. However, this is something we don't have to, and shouldn't, worry about. Remember that Core Data is not a database.
The same goes for relationships. How Core Data keeps track of relationships is something we don't need to worry about. In fact, Core Data makes sure relationships are only loaded when the application needs them. This is something we'll revisit later in this series.
Add two more attributes to the Person entity, last of type String and age of type Integer16. The type you choose for numbers is not important at this point. It tells Core Data how it should structure the database and optimize for performance.
Attribute Options
The attribute of an entity can be configured through the Data Model Inspector. Select the first attribute of the Person entity and open the inspector on the right. The Data Model Inspector lets you configure the selected attribute. At this point, we're only interested in a few settings, Optional, Attribute Type, and Default Value.
Optional
Marking an attribute as optional means that the attribute can be empty or left blank for a record. In our example, however, we want to make sure every Person record has a first name. With the first attribute selected, uncheck Optional to mark it as required. New attributes are optional by default.
Marking an attribute as required has consequences though. If we save a Person record without a valid first name, Core Data will throw an error. This means that we need to make sure that the record's first attribute is set before saving it.
Attribute Type
The attribute type is important for several reasons. It tells Core Data in what format it should save the attribute and it will also return the attribute's data to us in the specified format. Each attribute type has a different set of configuration options. Change the attribute type of the first attribute to Date to see the configuration options for an attribute of type Date.
Default Value
Several attribute types, such as String and Date, have a Default Value field you can set. This is convenient, for example, if an attribute is required and you want to ensure the attribute for a record has a valid value when it's inserted in the database.
Note that the default value is only used when a new record is created. If an existing Person record, for example, is updated by setting the first attribute to nil, then Core Data won't populate the first attribute with the default value. Instead, Core Data would throw an error, because we marked the first attribute as required.
4. Relationships
Core Data really shines when you start working with relationships between entities. Let's see how this works by adding a second entity named Address. The Address entity has four attributes of type String, street, number, city, and country.
Relationships between entities have a number of defining characteristics: the name, the destination, the cardinality of the relationship, the inverse relationship, and the relationship's delete rule.
Let's explore relationships in more detail by creating a relationship between the Person and Address entities.
Name, Destination, and Optionality
Create a relationship by selecting the Person entity and clicking the plus button at the bottom of the Relationships table. Name the relationship address and set the Destination to the Address entity. This indicates that each person record can be associated with an address record.
As with attributes, relationships are optional by default. This means that no validation error will be thrown if a person record has no relationship with an address record. Let's change this by unchecking the Optional checkbox in the Data Model Inspector on the right.
Inverse Relationship
At the moment, a person record can have a relationship with an address record. However, if a person record has an address record associated with it, the address record does not know about the person record, because the relationship is currently uni-directional, from Person to Address. Most relationships in Core Data, however, are bi-directional; both entities know about the relationship.
Let's create the inverse relationship from the Address entity to the Person entity by selecting the Address entity and creating a relationship named person with the Person entity as its destination.
Even though we created the inverse relationship between Address and Person, Xcode gives us a few warnings telling us Person.address should have an inverse and Address.person should have an inverse. Did we do something wrong?
Core Data isn't clever enough to know which relationship is the inverse of which. This is easy to fix though. Select the Person entity and set the Inverse of the address relationship to the person relationship. If you now select the Address entity, you'll see that the inverse of the person relationship has already been set to the address relationship for you.
Data Model Graph
When the data model gains in complexity, relationships can become confusing and unclear. Xcode has your back covered though. The data model editor has two styles, table and graph. In the bottom right of the editor, you'll see a toggle that lets you switch between the two modes. Click the left button to switch to the graph style.
The graph style shows the object graph we've created so far, including the entities, their attributes, and their relationships. One of the most useful features, however, is the visual representation of the relationships between the data model's entities. A line with an arrow at each end connects Person and Address, signifying their bi-directional relationship.
To-Many Relationships
The relationships we've created so far are to-one relationships, a person can have one address and vice versa. However, it's perfectly possible that several people live at the same address. How would we include this extra information in our data model?
A relationship's cardinality specifies if it's a to-one or to-many relationship. Let's change the person relationship of the Address entity to make it a to-many relationship. Select the person relationship of the Address entity, change its name to persons to reflect the to-many relationship, and set the relationship Type to To Many in the inspector on the right.
The name of the relationship isn't important, but it shows that it's a to-many relationship. Did you notice that the data model graph updated as well? The relationship endpoint to the Person entity has two arrows to signify the to-many nature of the relationship.
Many-To-Many Relationships
Wait a minute. Isn't it possible that a person is associated with more than one address? A person can have a work address and a home address, right? Core Data solves this with a many-to-many relationship. Select the address relationship of the Person entity, change its name to addresses, and set the relationship Type to To Many. The data model editor shows the updated relationship as a line with two arrows at each end.
Reflexive Relationships
The way Core Data implements relationships is very flexible. The destination entity of a relationship can even be the same as the source entity. This is known as a reflexive relationship. It's also possible to have multiple relationships of the same type with different names. A person, for example, can have a mother and a father. Both relationships are reflexive with the only difference being the name of the relationship.
Delete Rules
What happens when the record on one end of the relationship is deleted? If you were to think about Core Data as a database, then the answer would be obvious. Core Data, however, isn't a database.
Assume you have an Account entity with a to-many relationship to a User entity. In other words, an account can have many users and each user belongs to one account. What happens when you delete a user? What happens when you delete an account? In Core Data, each relationship has a delete rule that makes it clear what happens in these situations.
Delete rules make sure you don't have to worry about explicitly updating the backing store when a record is deleted. Core Data takes care of this to ensure the object graph remains in a consistent state.
Select the addresses relationship of the Person entity and open the inspector on the right. The Delete Rule menu has four options, No Action, Nullify, Cascade, and Deny.
No Action
If you select No Action, Core Data doesn't update or notify the source record of the relationship. This means that the source record of the relationship still thinks it has a relationship with the record that was deleted. Note that this is rarely what you want.
Nullify
This option sets the destination of the relationship to null when the destination record is deleted. This is the default delete rule of a relationship.
Cascade
If the relationship from Person to Address is set to Cascade, deleting a person record will also delete any address records that are associated with the person record. This is useful, for example, if a relationship is required and the record cannot or shouldn't exist without the relationship. A user, for example, shouldn't exist if it's not associated with an account.
Deny
In a sense, Deny is the inverse of Cascade. For example, if we have an Account entity that has a to-many relationship with a User entity with its delete rule set to Deny, an account record can only be deleted if it has no user records associated with it. This ensures that no user records exist without an account record. The result is similar to the Cascade delete rule, but the implementation differs.
Conclusion
In this tutorial, we've taken a closer look at the data model used by a Core Data application. You should now be familiar with entities, attributes, and relationships, and you should be able to create them using Xcode's data model editor.
Core Data is very good at managing relationships and Xcode's data model editor makes it easy to create and manage relationships between entities. Relationships between entities are powerful and easily configurable. Delete rules ensure that the object graph Core Data manages remains healthy and in a consistent state.
In the next article, we get our hands dirty and start working with Core Data. You'll learn how to create, read, update, and delete records, and become familiar with NSManagedObject and NSFetchRequest.
Google Play Game Services provides the opportunity to add social features to your games through users' Google+ accounts. In this tutorial, we will demonstrate how you can add leaderboards to an Android app, submit user scores, and present the current leaderboard standings within the game.
Using leaderboards involves preparing your IDE, configuring the leaderboard in the Google Play Developer Console, and adding functionality to your app.
If you completed the recent tutorial on adding achievements to Android apps, you will be able to skip some of the steps in this one. The attached source code includes the same app we used for the achievements tutorial, with both achievements and leaderboards functionality added.
1. Prepare Your IDE
Step 1
To use Google Play Services, you need certain utilities installed in your development environment. In addition, since we are using Game Services, we will install the BaseGameUtils library, which reduces the amount of coding we need to implement features such as Google+ sign-in.
To get started, create a new app or use an existing one. If you followed the achievements tutorial, you can use the app you built for that tutorial. If you are creating your own game, decide what you want to use leaderboards for and when you plan on submitting a user score. Each leaderboard score will be a number. You can configure the leaderboard to regard lower or higher number values as better in terms of position in the leaderboard, but naturally this will depend on the purpose of your game.
The code in the download folder includes a simple game in which the user guesses a number. We will use the number of guesses required to get the correct answer as the leaderboard score. In this case, fewer guesses are better, so the leaderboard will present the lowest scores first. For simplicity, we will limit the number of guesses a user can take. This is a trivial example to demonstrate the leaderboard concept and functionality. Your own games will likely involve more complexity.
Step 2
Let's get Eclipse ready for developing with Google Play Game Services. Open the Android SDK Manager and scroll to the Extras folder. Expand the folder and select Google Play Services plus the Google Repository. Install the Google APIs Platform from one of the recent Android versions as well if you want to test on the emulator. Install the chosen packages.
Step 3
Eclipse will also need to reference some additional resources in the workspace. On your computer, navigate to the location of the Google Play Services Library, which should be in the Android SDK folder, at extras/google/google_play_services/libproject/google-play-services_lib/. Copy and paste the library somewhere else on your computer.
We now need a reference to this copy in Eclipse. Choose Import > Android > Import Existing Android Code into Workspace from the File menu. Select the location of the copy you made. The imported library should now show up as a new project in Eclipse. Right-click it and choose Properties. In the Android section, choose a Google APIs build target and check the Is Library checkbox.
Step 4
Importing the BaseGameUtils resource is slightly different. The library is hosted on GitHub. You can find it in the Downloads section, under Sample Games. Download the library and save it to your computer.
As you did for the Google Play Services library, choose Import > Android > Import Existing Android Code into Workspace from the File menu to bring the BaseGameUtils library into Eclipse. Right-click to navigate to the new project properties and make sure the project is marked as a library by checking Is Library.
Step 5
We can now make the app refer to these two resources within the workspace. Right-click your app in the Package Explorer and choose Properties. Navigate to the Android section and select Add in the Library section. Choose both the Google Play Services library and BaseGameUtils, and add them to your app.
2. Prepare Your Game in the Developer Console
Step 1
Before you can create a leaderboard, the app needs to be listed in the Google Play Developer Console. Log in and click the Game Services button to the left. If you already did this for your app in the achievements tutorial, you do not need to do it again. You can jump to section 3 on creating a leaderboard.
Click Set up Google Play game services.
Click to add a new game, select I don't use any Google APIs in my game yet, and choose a name and category for your game. Click Continue to go to the next step.
Add your game's title. You can add other details later.
Step 2
Let's now link the app so that we can refer to this Developer Console listing in the app itself. Click the Linked Apps entry in the list on the left and choose Android.
Enter your app info including the package name, making sure it's the same as the one you are using in your project.
Save and click Authorize your app now. For the moment, you can just add the app name, but you can enter more details later. Choose Installed Application in the Client ID area, with Android as the type and enter your package name. You now need to use the keytool utility to generate a signing certificate. You can use the following command in a terminal or command prompt in combination with the debug certificate:
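The command itself is missing here. For the debug certificate it typically looks like the following, assuming the default debug keystore at ~/.android/debug.keystore (the store password is android); adjust the path if yours differs:

```shell
keytool -exportcert -alias androiddebugkey \
    -keystore ~/.android/debug.keystore -list -v
```

When prompted, enter the password android. Remember that for a release build you would run the same command against your release keystore and alias instead.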
The terminal or command prompt will write out the fingerprint for the certificate. Copy what you see after SHA1 and paste it into the Developer Console in the Signing Certificate Fingerprint text area.
Select Create Client and copy the ID for the application, which is listed next to the app name in the Developer Console. You will be adding the ID to your app along with the ID for the leaderboard we are about to create.
3. Create a Leaderboard
Step 1
Still in the Developer Console, let's now create a new leaderboard. Select the Leaderboards section in your app listing and click Add leaderboard.
Make sure you understand the concept of Leaderboards on Android—and in Google Play Game Services generally. You can read an overview on the Google Play Game Services website. You can actually do a lot of different things with leaderboards, so consider what we do in this tutorial just a starting point.
Enter the details for your new leaderboard. For our sample code, we use the name Least Guesses and select Smaller is Better in the Ordering section.
Add an icon if you like. If you don't, a standard image will be used. Save your new leaderboard and copy its ID.
Step 2
In the Testing section for your app in the Developer Console, you can add accounts that will be granted access to test the game. By default, you will see your own Google account email listed there, so you should be able to use it for testing your app.
4. Prepare Your Game for Accessing Games Services
Step 1
It's time to get the app ready for leaderboard access in Eclipse. If you completed the achievements tutorial you can skip some of this section. Let's first add the IDs for the app and the leaderboard. Open or create a res/values/ids.xml resource file. Use the following syntax to enter the IDs you copied for the app and the new leaderboard when you created them in the Developer Console:
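The resource snippet itself is missing here. A sketch of what it would look like; the values are placeholders, so paste the IDs you copied from the Developer Console, and note that the achievement and leaderboard resource names match the ones referenced in the code later in this tutorial:

```xml
<?xml version="1.0" encoding="utf-8"?>
<resources>
    <!-- Placeholder values: replace with your own IDs from the Developer Console. -->
    <string name="app_id">123456789012</string>
    <string name="correct_guess_achievement">CgkI...</string>
    <string name="number_guesses_leaderboard">CgkI...</string>
</resources>
```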
The app is now set up to link to the listings we added in the Developer Console.
Step 2
When you utilize Google Services in your Android apps, you need your users to sign into their Google accounts. You can take a number of approaches to implement this, but we are going to automate parts of this process by using the BaseGameActivity class together with standard buttons for signing in and out. Additionally, when the activity starts, the app will attempt to log the user in straight away.
Open your app's main layout file and add buttons for sign in/out:
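The layout snippet is missing here. A minimal sketch declaring the two ids the sign-in code expects (com.google.android.gms.common.SignInButton is the standard Google sign-in widget); wire both views up to the activity's onClick handler, for example with setOnClickListener in onCreate:

```xml
<com.google.android.gms.common.SignInButton
    android:id="@+id/sign_in_button"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content" />

<Button
    android:id="@+id/sign_out_button"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Sign Out"
    android:visibility="gone" />
```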
Now add the following onClick method to the Activity class:
@Override
public void onClick(View view) {
    if (view.getId() == R.id.sign_in_button) {
        beginUserInitiatedSignIn();
    } else if (view.getId() == R.id.sign_out_button) {
        signOut();
        findViewById(R.id.sign_in_button).setVisibility(View.VISIBLE);
        findViewById(R.id.sign_out_button).setVisibility(View.GONE);
    }
}
The methods we call here are provided by the BaseGameActivity class our Activity class is inheriting from, so we don't need to handle the details manually. Finally, we add a couple of standard callbacks:
@Override
public void onSignInSucceeded() {
    findViewById(R.id.sign_in_button).setVisibility(View.GONE);
    findViewById(R.id.sign_out_button).setVisibility(View.VISIBLE);
}

@Override
public void onSignInFailed() {
    findViewById(R.id.sign_in_button).setVisibility(View.VISIBLE);
    findViewById(R.id.sign_out_button).setVisibility(View.GONE);
}
When we call on the leaderboard functionality, we will first check that the app has a connection to Google Services. You could alternatively add code to these methods to manage your app's awareness of whether or not Play Services can be called on.
5. Implement Your Leaderboard
Step 1
Now we can let the app use the leaderboard. The code in the sample app uses the following layout. I won't go into detail explaining the layout as your own apps will have a different layout.
We add buttons to access both achievements and leaderboards for the app. If you haven't completed the achievements tutorial, then you can remove the achievements button.
Back in your application's Activity class, we will be using these instance variables:
private Button button0, button1, button2, button3, button4, button5,
button6, button7, button8, button9, buttonAgain;
private int number;
private Random rand;
private TextView info;
private int numGuesses=0;
If you completed the achievements tutorial you may notice an additional variable, numGuesses, to keep track of the number of user guesses each time they play the game.
You will need the following additional code in the onCreate method of the Activity class. If you're not using the achievements button, then remove the line that references it.
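The original additions aren't reproduced here, but they boil down to seeding the game and wiring up the views. A sketch, assuming hypothetical layout ids (info, show_achievements, show_leaderboard) that you should adjust to your own layout:

```java
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    // Hypothetical ids: adjust these to match your own layout.
    info = (TextView) findViewById(R.id.info);
    rand = new Random();
    number = rand.nextInt(10);

    findViewById(R.id.sign_in_button).setOnClickListener(this);
    findViewById(R.id.sign_out_button).setOnClickListener(this);
    findViewById(R.id.show_achievements).setOnClickListener(this);
    findViewById(R.id.show_leaderboard).setOnClickListener(this);
}
```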
We also need the following method, which we specified as the onClick attribute for the number buttons in the layout. The player taps one of these to make a guess:
public void btnPressed(View v) {
    int btn = Integer.parseInt(v.getTag().toString());
    if (btn < 0) {
        // The Again button: reset the game.
        numGuesses = 0;
        number = rand.nextInt(10);
        enableNumbers();
        info.setText("Set the number!");
    } else {
        // A number button: count the guess.
        numGuesses++;
        if (btn == number) {
            info.setText("Yes! It was " + number);
            if (getApiClient().isConnected()) {
                Games.Achievements.unlock(getApiClient(),
                        getString(R.string.correct_guess_achievement));
                Games.Leaderboards.submitScore(getApiClient(),
                        getString(R.string.number_guesses_leaderboard),
                        numGuesses);
            }
            disableNumbers();
        } else if (numGuesses == 5) {
            info.setText("No! It was " + number);
            disableNumbers();
        } else {
            info.setText("Try again!");
        }
    }
}
Take a moment to look over the code. Even if you completed the app in the achievements tutorial, there are some changes to the logic in addition to the extra leaderboard code. If the player taps the Again button, we reset the numGuesses variable to 0. If the user taps a number button, we increment numGuesses. If you aren't using achievements, you can remove any code that references achievements.
We submit the score to the leaderboard when the user guesses correctly. The user can make up to five guesses.
The key line here is submitScore. We pass the number of guesses the player took to get the correct number. If the number of guesses is lower than any existing entry for the user in the leaderboard, their entry will be replaced with this new value. Notice that we use the string resource value we defined for the leaderboard.
Step 2
Before we finish, let's allow the user to view the game leaderboard by tapping the Leaderboard button we added. We used the following code in onClick for the achievements:
else if (view.getId() == R.id.show_achievements) {
    startActivityForResult(Games.Achievements.getAchievementsIntent(
            getApiClient()), 1);
}
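The corresponding snippet for the leaderboard button is missing here. It follows the same pattern as the achievements code, assuming a hypothetical button id of show_leaderboard and reusing the leaderboard ID resource defined earlier:

```java
else if (view.getId() == R.id.show_leaderboard) {
    startActivityForResult(Games.Leaderboards.getLeaderboardIntent(
            getApiClient(),
            getString(R.string.number_guesses_leaderboard)), 2);
}
```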
This will let the user see the current standings within the leaderboard. The integer parameter is arbitrary.
When you run the app, it will attempt to log the user in, checking for permissions, and confirming login if successful:
The user is free to choose to sign out and back in whenever they like, but if they leave the app, it will attempt to automatically log them back in when they open it again. When the user guesses correctly, their score will be submitted to the leaderboard. Pressing the Leaderboard button will present the current standings:
From here, the user can access social features of Google Services via their Google account. You can set your apps up to use public and social leaderboards. Social leaderboards present listings of people in the user's circles, which can be managed for the game itself. For a public leaderboard, the user must have opted to share their scores publicly.
Conclusion
In this tutorial, we have implemented basic leaderboard functionality with Google Play Game Services. Note that you can do much more with leaderboards in your apps. For example, you can request leaderboard data for particular time-scales such as daily, weekly, and all-time. If a leaderboard contains a lot of scores, it is possible for your app to only fetch the top scores or the scores closest to the current player. Try experimenting with some of these enhancements in your own apps.
Enumerating collections in Objective-C is often verbose and clunky. If you're used to Ruby or have worked with Underscore or Lo-Dash in JavaScript, then you know there are more elegant solutions. That is exactly what the creators of YOLOKit thought when they created this nifty library. YOLOKit's tagline is Enumerate Foundation delightfully, and they mean it.
1. Installation
Adding YOLOKit to an Xcode project is very easy with CocoaPods. Include the pod in your project's Podfile, run pod install from the command line, and import YOLO.h wherever you want to use YOLOKit.
If you're not using CocoaPods, then download the library from GitHub, add the relevant files to your project, and import YOLOKit's header.
2. Using YOLOKit
YOLOKit has a lot to offer, but in this quick tip I'll only focus on a few of the methods YOLOKit has in its repertoire.
Minimum and Maximum
Let's start simple with extracting the minimum and maximum value of an array. Take a look at the following code snippet to see how it works.
NSArray *numbers = @[ @(1), @(2), @(45), @(-12), @(3.14), @(384) ];
// Minimum
id min = numbers.min(^(NSNumber *n) {
return n.intValue;
});
// Maximum
id max = numbers.max(^(NSNumber *n) {
return n.intValue;
});
NSLog(@"\nMIN %@\nMAX %@", min, max);
The above code snippet results in the following output.
MIN -12
MAX 384
The syntax may seem odd and you may be wondering why min and max take a block, but this actually adds more power to these methods. You can do whatever you like in the block to determine what the minimum and maximum value of the array is. The following example, which looks for the shortest and longest word in an array, should clarify this.
NSArray *words = @[ @"this", @"is", @"a", @"example", @"for", @"everyone" ];
id shortest = words.min(^(NSString *word) {
return (NSInteger)word.length;
});
id longest = words.max(^(NSString *word) {
return (NSInteger)word.length;
});
NSLog(@"\nSHORTEST %@\nLONGEST %@", shortest, longest);
This code snippet results in the following output.
SHORTEST a
LONGEST everyone
YOLOKit is flexible and doesn't complain about the type of the block arguments. However, to satisfy the compiler, we cast the return value of the block to NSInteger, because that's what it expects.
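For comparison, here is how the same minimum and maximum lookups read in plain Ruby, the language whose enumerator style YOLOKit borrows. This is a sketch for illustration, not YOLOKit code:

```ruby
# Ruby's Enumerable counterparts of YOLOKit's min and max.
numbers = [1, 2, 45, -12, 3.14, 384]
words   = %w[this is a example for everyone]

smallest = numbers.min             # -12
largest  = numbers.max             # 384
shortest = words.min_by(&:length)  # "a"
longest  = words.max_by(&:length)  # "everyone"
```

The blocks passed to min_by and max_by play the same role as the blocks passed to YOLOKit's min and max: they let you decide what is being compared.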
Filtering Arrays
Selecting & Rejecting
There are a number of methods to filter arrays, including select and reject. Let's see how we can filter the arrays of numbers and words we created earlier.
NSArray *evenNumbers = numbers.select(^(NSNumber *n) {
return n.intValue % 2 == 0;
});
NSArray *longWords = words.reject(^(NSString *word) {
return word.length < 4;
});
NSLog(@"\nEVEN %@\nLONG %@", evenNumbers, longWords);
You have to admit that this is very nice to look at. It's concise and very legible. The arrays in these examples are simple, but keep in mind that you can apply the same methods to arrays of much more complex objects, such as dictionaries or model objects.
YOLOKit also defines first and last, but they don't do what you expect them to do. In other words, they're not equivalent to NSArray's firstObject and lastObject methods. With first and last you can create a subarray from the original array. Take a look at the following example.
NSArray *firstThree = numbers.first(3);
NSArray *lastThree = numbers.last(3);
NSLog(@"\nFIRST %@\nLAST %@", firstThree, lastThree);
One of the benefits of using NSSet is that it doesn't contain duplicate objects. However, uniquing an array of objects is trivial with YOLOKit. Let's add a few additional numbers with YOLOKit's concat method and then unique the array with uniq.
NSArray *uniqueNumbers = numbers.concat(@[ @(1), @(2), @(45) ]).uniq.sort;
NSLog(@"UNIQUE %@", uniqueNumbers);
Have you noticed I also sorted the array by chaining uniq and sort? The goal isn't to turn Objective-C code into Ruby or JavaScript, but I'm sure you agree that this code snippet is concise, and very easy to read and understand.
Reversing and Shuffling
Reversing or shuffling an array is just as easy with reverse and shuffle.
NSLog(@"\nREVERSED %@", numbers.reverse);
NSLog(@"\nSHUFFLED %@", words.shuffle);
The above code snippet results in the following output.
REVERSED
(
384,
"3.14",
"-12",
45,
2,
1
)
SHUFFLED
(
for,
is,
everyone,
example,
a,
this
)
Other Methods
There are a lot of other methods to work with arrays, such as rotate, sample, without, set, transpose, etc. I encourage you to browse YOLOKit on GitHub to find out more about them.
There are also methods that can be used with NSDictionary, NSNumber, and NSString. The following code snippet shows you how to convert a string into an array of words.
id wordsInString = @"You only live once. Right?".split(@" ");
NSLog(@"STRING %@", wordsInString);
STRING (
You,
only,
live,
"once.",
"Right?"
)
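For reference, Ruby's built-in String#split behaves the same way. A quick sketch:

```ruby
# Splitting a sentence into words, as YOLOKit's split does for NSString.
words_in_string = "You only live once. Right?".split(" ")
# words_in_string is ["You", "only", "live", "once.", "Right?"]
```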
3. Considerations
Code Completion
Because of YOLOKit's odd syntax, Xcode won't be of much help when it comes to code completion. It will show you a list of suggestions for YOLOKit's methods, but that's about it. If you want to use YOLOKit, you'll have to learn the syntax.
Performance
YOLOKit isn't optimized for performance, as this GitHub issue shows. However, it does make your code prettier and more readable. A plain for loop over an array will always be faster than YOLOKit's methods, and it's important to keep this in mind.
Conclusion
Do I recommend YOLOKit? Yes and no. The above considerations shouldn't keep you from using YOLOKit, but make sure that you don't use YOLOKit if performance is important, because there are better options available—like the good ol' for loop.
The long and the short of it is that you should only use YOLOKit if you feel it adds value to your project. Also consider that your colleagues will need to learn and appreciate YOLOKit's syntax. I think YOLOKit is a great project that clearly shows how incredibly expressive Objective-C can be. For me, that's the most important lesson to take away from YOLOKit.
RubyMotion is a framework that lets you build iOS applications in Ruby. It gives you all of the benefits of the Ruby language, but because your code is compiled to machine code, you gain all of the raw performance of developing in Objective-C. RubyMotion lets you use the iOS SDK directly, which means you have access to all of the latest features of the platform. You can include Objective-C code into your project and RubyMotion even works with CocoaPods.
In this tutorial, you’ll build a painting application from scratch. I’ll show you how to incorporate Interface Builder into your workflow and how to properly test your application. If you don’t have any prior iOS or Ruby experience, I’d recommend you learn more about those first. The Tuts+ Ruby for Newbies and Learning iOS SDK Development from Scratch guides are a great place to start.
1. Project Setup
Before you can start coding, you need to have RubyMotion installed and set up. For details on how to do this, check out the Prerequisites section of the RubyMotion Getting Started guide.
Once you've done that, open up your terminal and create a new RubyMotion project by running:
motion create paint
cd paint
This creates a paint directory and several files:
.gitignore: This file tells Git which files to ignore. Because RubyMotion generates build files when it’s running, this file is useful for keeping your generated build files out of source control.
Gemfile: This file contains your application’s dependencies.
Rakefile: RubyMotion uses Rake to build and run your application. The Rakefile configures your application and loads its dependencies. You can see all of the tasks available to your application by running rake -T from the command line.
app/app_delegate.rb: The application delegate is the entry point to your application. When iOS finishes loading your application into memory, the application delegate is notified.
RubyMotion also generates a spec/main_spec.rb file. I’ll show you how to test your application a little later in this tutorial. For now, you can delete this file by running rm spec/main_spec.rb from the command line.
Install your application’s dependencies by running bundle install followed by bundle exec rake to start your application.
Woohoo! A black screen. You’ll make it more interesting in a minute.
2. First Change
Even though it’s nice to have a running app, a black screen is a little boring. Let’s add a little color.
Like the native iOS SDK, RubyMotion doesn't force you to organize your files in any specific way. However, it's useful to create a few folders in the app directory to keep your project organized. Run the following commands from the command line to create directories for your models, views, and controllers.
mkdir app/models
mkdir app/views
mkdir app/controllers
Next, take a look inside the app/app_delegate.rb file:
class AppDelegate
def application(application, didFinishLaunchingWithOptions:launchOptions)
true
end
end
If you’re familiar with iOS development, you’ll notice this method belongs to the UIApplicationDelegate protocol, which provides several hooks into the application life cycle. Note that the AppDelegate class doesn’t declare that it implements the UIApplicationDelegate protocol. Ruby relies on duck typing as it doesn't support protocols. This means it doesn’t care whether your class says it implements a protocol, it only cares if it implements the correct methods.
The definition of the application:didFinishLaunchingWithOptions: method inside the AppDelegate class may look a little strange. In Objective-C, this method would be written like this:
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
Because Objective-C method names can be split into several parts, RubyMotion implements them in a unique way. The first part of application:didFinishLaunchingWithOptions: becomes the method name, as it would in MRI. The rest of the method signature is written like keyword arguments. In RubyMotion, application:didFinishLaunchingWithOptions: is written like this:
def application(application, didFinishLaunchingWithOptions:launchOptions)
end
Let’s implement this method.
class AppDelegate
def application(application, didFinishLaunchingWithOptions:launchOptions)
@window = UIWindow.alloc.initWithFrame(UIScreen.mainScreen.bounds)
@window.makeKeyAndVisible
@window.rootViewController = UIViewController.alloc.initWithNibName(nil, bundle: nil)
true
end
end
The first two lines of the application:didFinishLaunchingWithOptions: method create a new window object and make it the key window of the application. Why is @window an instance variable? RubyMotion will garbage collect the window unless we keep a reference to it. The last line of the method sets the window's root view controller to a new, empty view controller.
Run the application to make sure everything is still working.
Hmm. The application runs, but the screen is still black. How do you know your code is working? You can do a quick sanity check by adding the following line to the bottom of application:didFinishLaunchingWithOptions:, just before true.
@window.backgroundColor = UIColor.redColor
Be sure to remove this line before moving on.
3. Testing
No application is complete without a solid suite of tests. Testing allows you to be confident your code works and lets you make changes without worrying about breaking existing code.
RubyMotion ships with a port of the Bacon testing library. If you've used RSpec, Bacon will feel very familiar.
To get started, mirror the app directory structure in the spec directory by running the following commands from the command line.
mkdir spec/models
mkdir spec/views
mkdir spec/controllers
Next, create the AppDelegate's specification file at spec/app_delegate_spec.rb. By convention, source files are mirrored in the spec directory with _spec appended to the end of their file names.
Start this class by defining a describe block that tells the reader what your file is testing.
describe AppDelegate do
end
Next, add a second describe block within the first to show that you want to test the application:didFinishLaunchingWithOptions: method.
describe AppDelegate do
describe "#application:didFinishLaunchingWithOptions:" do
end
end
Did you notice the # at the beginning of the method signature? By convention, instance methods begin with a hash and class methods begin with a period.
Next, add a spec using an it block.
describe AppDelegate do
describe "#application:didFinishLaunchingWithOptions:" do
it "creates the window" do
UIApplication.sharedApplication.windows.size.should == 1
end
end
end
One of the best things about Bacon—and other BDD test frameworks—is that the specs are very clear about what they’re testing. In this case, you're making sure the application:didFinishLaunchingWithOptions: method creates a window.
Your spec doesn't have to call the application:didFinishLaunchingWithOptions: method directly. It's called automatically when Bacon launches your application.
Run your application's specs by running bundle exec rake spec from the command line. You should see output like this:
1 specifications (1 requirements), 0 failures, 0 errors
This tells you that Bacon ran one test and didn’t find any errors. If one of your specs fails, you’ll see 1 failure and Bacon will print out a detailed description of the problem.
The above works, but you’ll be using UIApplication.sharedApplication for all of your specs. Wouldn’t it be nice if you could grab this object once and use it in all of the specs? You can with a before block.
describe AppDelegate do
describe "#application:didFinishLaunchingWithOptions:" do
before do
@application = UIApplication.sharedApplication
end
it "creates the window" do
@application.windows.size.should == 1
end
end
end
Now you can easily add the rest of the application's specs.
describe AppDelegate do
describe "#application:didFinishLaunchingWithOptions:" do
before do
@application = UIApplication.sharedApplication
end
it "creates the window" do
@application.windows.size.should == 1
end
it "makes the window key" do
@application.windows.first.isKeyWindow.should.be.true
end
it "sets the root view controller" do
@application.windows.first.rootViewController.should.be.instance_of UIViewController
end
end
end
Run these to make sure everything works before moving on.
4. Adding the User Interface
There are several ways to create the user interface using RubyMotion. My personal favorite is to use Interface Builder with the IB gem. Open up your Gemfile and add the IB gem.
source 'https://rubygems.org'
gem 'rake'
gem 'ib'
Run bundle install from the command line to install the gem. If you’re using Git, add ib.xcodeproj to your .gitignore file.
Interface Builder is a part of Xcode. Launch Interface Builder by running bundle exec rake ib:open. This creates an Xcode project tailored to your application. Create a new user interface file by selecting New > File... from Xcode's File menu and choosing Storyboard from the User Interface category on the left. Click Next twice to complete this step.
Save the storyboard in the resources directory as main.storyboard. Open the storyboard in Xcode and drag a new View Controller into it from the Object Library on the right. Set the Storyboard ID field of the controller to PaintingController.
Drag a label into the view controller’s view from the Object Library on the right and set its text to Hello.
Next, open up app/app_delegate.rb and replace the last line of application:didFinishLaunchingWithOptions: with the following:
storyboard = UIStoryboard.storyboardWithName('main', bundle: nil)
@window.rootViewController = storyboard.instantiateViewControllerWithIdentifier('PaintingController')
Next, run your application’s tests again with bundle exec rake spec to make sure they still pass. Notice how you didn’t have to change any of them? Good specs test the behavior of the code, not its implementation. This means you should be able to change how your code is implemented and your specs should still work. Run your application to test drive your new user interface.
5. Buttons
What you’ve built so far is great, but wouldn’t it be nice if your app actually did something? In this section, you’ll add the controls for switching the color of the paint brush. Create two new files, a controller and its spec, by running the following commands.
Implement the PaintingController's skeleton along with its spec.
class PaintingController < UIViewController
end
describe PaintingController do
tests PaintingController, :storyboard => 'main', :id => 'PaintingController'
end
RubyMotion handles controller specs in a special way. The tests PaintingController, :storyboard => 'main', :id => 'PaintingController' line of the spec file tells RubyMotion to use the controller with a storyboard ID of PaintingController in the main storyboard. You can use the controller variable to test it.
Next, you’ll need to add outlets to your controller. These allow you to connect objects to your controller in Interface Builder.
class PaintingController < UIViewController
extend IB
outlet :black_button
outlet :purple_button
outlet :green_button
outlet :blue_button
outlet :white_button
def select_color(sender)
end
end
extend IB adds several methods to your controller, including outlet. You’ve added five outlets, one for each button.
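The mechanism behind this is ordinary Ruby: extending a module adds its methods at the class level. The stand-in module below is hypothetical, not the IB gem's actual implementation, but it shows how a class-level outlet macro can work:

```ruby
# IBStandIn mimics the shape of the IB gem's outlet macro.
module IBStandIn
  def outlet(name)
    outlets << name
    attr_accessor name # each outlet becomes a readable/writable attribute
  end

  def outlets
    @outlets ||= []
  end
end

class Controller
  extend IBStandIn # adds outlet and outlets as class-level methods

  outlet :black_button
  outlet :purple_button
end

Controller.outlets                        # [:black_button, :purple_button]
Controller.new.respond_to?(:black_button) # true
```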
The images for the buttons are included in the source files of this tutorial. Download the images and copy them into the resources directory. You need to regenerate your Xcode project to allow Interface Builder to pick up the changes we've made. The easiest way to do this is by closing Xcode and running bundle exec rake ib:open, which will reopen the project.
Select the view controller and change its class to PaintingController.
Open spec/app_delegate_spec.rb and modify the last spec to check for the PaintingController class.
it "sets the root view controller" do
@application.windows.first.rootViewController.should.be.instance_of PaintingController
end
Add five buttons to the view controller's view by dragging Button objects onto the view from the Object Library on the right.
These buttons are a bit dull. Select the first button, change its type to Custom in the Attributes Inspector on the right and remove its title. Be sure the Default state is selected in the State Config drop-down menu and set the background image to button_black.png. Set the Tint property of the button to transparent.
Set the State Config drop-down menu to Selected and change the background image to button_black_selected.png.
In the Size Inspector, change the width and height of the button to 50.
Repeat this process for the other buttons.
The next step is to hook the buttons up to the view controller's outlets we declared earlier. Hold down the Control key on your keyboard and drag from the view controller to the first button. A menu will pop up when you release your mouse. Select black_button from the menu. Next, hold down the Control key and drag from the button to the view controller and choose the select_color method from the menu that pops up. Repeat these two steps for the other buttons.
Finally, select the first button and click on the Selected checkbox under Control in the Attributes Inspector.
Now's a good time to add a few helpful specs to spec/controllers/painting_controller_spec.rb.
describe PaintingController do
tests PaintingController, :storyboard => 'main', :id => 'PaintingController'
describe "#black_button" do
it "is connected in the storyboard" do
controller.black_button.should.not.be.nil
end
end
describe "#purple_button" do
it "is connected in the storyboard" do
controller.purple_button.should.not.be.nil
end
end
describe "#green_button" do
it "is connected in the storyboard" do
controller.green_button.should.not.be.nil
end
end
describe "#blue_button" do
it "is connected in the storyboard" do
controller.blue_button.should.not.be.nil
end
end
describe "#white_button" do
it "is connected in the storyboard" do
controller.white_button.should.not.be.nil
end
end
end
These specs ensure the outlets are properly connected in Interface Builder. As always, it’s a good idea to run them before proceeding to make sure they all pass.
Next, you’ll implement the select_color method in PaintingController. When this method is called, the button that was tapped is selected and the previously selected button is deselected.
def select_color(sender)
[ black_button, purple_button, green_button, blue_button, white_button ].each do |button|
button.selected = false
end
sender.selected = true
end
Add the specs to spec/controllers/painting_controller_spec.rb.
describe "#select_color" do
before do
controller.select_color(controller.green_button)
end
it "deselects the other colors" do
controller.black_button.state.should == UIControlStateNormal
controller.purple_button.state.should == UIControlStateNormal
controller.blue_button.state.should == UIControlStateNormal
controller.white_button.state.should == UIControlStateNormal
end
it "selects the color" do
controller.green_button.state.should == UIControlStateSelected
end
end
Run the application and make sure the button selection works. When you tap on a button, it should increase in size. While this is cool, what you really want is for a color to be selected when the button is tapped. This is easy to accomplish with a few additions.
Sugarcube is a set of iOS extensions for RubyMotion that make several tasks, like creating colors, simpler. Add gem 'sugarcube' to your Gemfile and run bundle install. Then, add require "sugarcube-color" to your Rakefile above Motion::Project::App.setup.
The gem makes it easy to create colors using their hex code. In the PaintingController class, add the following code snippet below the declaration of the outlets:
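A plain-Ruby sketch of what such a constant could look like follows; the hex values below are placeholders rather than the article's actual colors, and the real code would convert each string with Sugarcube's String#uicolor:

```ruby
# Maps a button's tag (0-4) to its color. In RubyMotion this would be
# e.g. '#000000'.uicolor; plain strings keep the sketch self-contained.
COLORS = [
  '#000000', # black
  '#9b59b6', # purple
  '#2ecc71', # green
  '#3498db', # blue
  '#ffffff', # white
]

COLORS[2] # the color for the green button, tag 2
```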
Next, refactor the array of buttons in select_color into a private helper method:
def select_color(sender)
buttons.each do |button|
button.selected = false
end
sender.selected = true
@color = COLORS[sender.tag]
end
private
def buttons
[ black_button, purple_button, green_button, blue_button, white_button ]
end
Finally, add a new method below select_color that returns the selected color.
def selected_color
COLORS[buttons.find_index { |button| button.state == UIControlStateSelected }]
end
This method grabs the index of the selected button and selects the color that corresponds to it. Of course, this method wouldn’t be complete without tests.
describe "#selected_color" do
before do
controller.select_color(controller.green_button)
end
it "returns the correct color" do
controller.selected_color.should == PaintingController::COLORS[2]
end
end
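The lookup inside selected_color is plain Enumerable logic. Here is the same pattern in standalone Ruby, with symbols standing in for the real buttons and colors:

```ruby
# Find the index of the selected entry, then use it to index a
# parallel array, just like selected_color does with its buttons.
states = [:normal, :normal, :selected, :normal, :normal]
colors = [:black, :purple, :green, :blue, :white]

selected = colors[states.find_index { |state| state == :selected }]
# selected is :green
```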
Run your application again to make sure everything works as expected.
Conclusion
You’ve covered a lot of ground in this tutorial. You’ve learned how to set up and run a RubyMotion application, you've worked with Interface Builder, and you've built a user interface.
In the second part of this tutorial, you’ll dive deeper into the Model-View-Controller pattern on iOS and your application’s organization. You’ll also add a painting view and write the code that allows the user to draw. Stay tuned.
WWDC is like Christmas for Cocoa developers, and this is certainly true for this year's edition due to the scarcity of leaks and rumors leading up to the conference. The keynote is so much more fun when you have no clue what's about to be announced, like this year.
If you've seen Tim Cook's keynote, then I'm sure you agree that Apple surpassed everyone's expectations. Let's take a few minutes to summarize what Apple has announced, what it means for developers, and what you can expect later this year.
Confident & Fierce
No matter what's been written about Apple in recent months, Apple is alive and kicking. It seems Apple has indeed doubled down on secrecy, because some, if not most, of what was announced during Monday's keynote was a surprise, even for people familiar with Apple's product line and roadmap.
What surprised me during the keynote was the tone of the main speakers, Tim Cook and Craig Federighi. Apple has regained the confidence that seemed to leave it in 2011, when Steve Jobs passed away.
The company is proud of its products and its developer community, and it doesn't shy away from occasionally ridiculing Android. The healthy relationship with Microsoft was also present throughout the keynote.
OS X Yosemite
From an iOS developer's perspective, the announcement of OS X Yosemite may not seem that important, but this isn't completely accurate. Even though Apple has repeatedly stated that iOS and OS X remain separate operating systems, it's clear the company is improving their integration with every release.
It's no coincidence that OS X's development cycle has changed from 18-24 months to 12 months, the same as that of iOS. While iOS inherited a lot from OS X during its first few years, it seems iOS is now returning the favor. With OS X Yosemite, the look and feel of OS X is more like that of iOS. Not only has OS X become flatter, like its little brother, the new Continuity feature is another step to a better integration of and communication between both operating systems.
Features like AirDrop, Handoff, and iCloud Drive make switching between iOS and OS X easier, almost frictionless. These features are part of the company's response to the request from consumers to make the integration between iOS and OS X better and less cumbersome.
But Apple didn't leave it at that. The company has taken it one step further by leveraging proximity sensing, which means that your Mac knows when your iOS device is nearby. This enables a few things, such as accepting incoming calls on your iPhone from your Mac. Your iPhone will also automatically set up a personal hotspot as soon as it knows one of your Macs is nearby. How cool is that?
iOS 8
The transition from iOS 6 to iOS 7 was more revolutionary than evolutionary, and I'm sure you agree the ride wasn't as smooth as Apple claims it was. iOS 7 introduced so many new features, visual changes, and paradigm shifts that the average user, and most developers, were a bit hesitant to embrace the new direction Apple had taken with iOS.
The announcement of iOS 8, however, is different. The majority of changes introduced in iOS 8 refine the operating system, integrate it with OS X, and improve its overall stability and usability.
Photos
The Photos application has undergone an overhaul and is now more powerful than ever. Apple briefly showed a version for OS X, which will ship early next year. There was no mention of iPhoto or Aperture, and it's unclear whether Photos for OS X will replace them.
With Photos for iOS, you can now search every photo and movie you've ever taken with any iOS device. At least, that's the idea; iCloud, which can now store every photo and movie you take with an iOS device, should make this possible.
With that change, Apple has entered the territory of Dropbox. Even though every photo and movie you take with an iOS device is stored in iCloud, note that this won't be free for everyone. Apple's pricing, however, seems more than reasonable.
Extensions
Starting with iOS 8, an application can have one or more extensions that extend the functionality of an application to other applications, including the operating system. Extensions are Apple's answer to a very common request from both developers and end users.
Extensions come in many forms. An extension can be a simple widget for Notification Center that displays weather data, but it can also be a custom keyboard, another big change for iOS. During the keynote, for example, Federighi showed how a third party application was used to edit a photo in Photos for iOS. The third party application provides the user interface and the integration seemed pretty seamless.
Touch ID
According to Apple, Touch ID is a big success with more than 80% of consumers having it enabled. In iOS 8, Apple opens up the Touch ID API to third party developers. Security remains key, which means that the actual fingerprint information isn't accessible or even exposed to developers.
iCloud and CloudKit
iCloud is still a very important aspect of the iOS and OS X ecosystem for Apple. In fact, the role of iCloud becomes more important with every iteration of iOS and OS X.
Apple's cloud solution has improved dramatically, both from an end user and a developer perspective. I already mentioned how iCloud can now store every photo and movie you take with your iOS devices, but Apple also opens up access to the data stored in iCloud by introducing iCloud Drive. It makes sharing data between iOS and OS X applications easier and more transparent.
Sending emails with large attachments is no longer a problem with iCloud Drive. As I mentioned earlier, iCloud is free up to 5GB. Additional space costs $0.99 per month for 20GB or $3.99 per month for 200GB, with an upper limit of 1TB.
iCloud Drive isn't the only change Apple's made on the server side; the company also introduced CloudKit. With CloudKit, Apple takes care of the server-side aspect of iOS application development, letting the developer focus on the iOS application. In doing so, Apple has entered the BaaS or PaaS market.
HealthKit
Apple also announced HealthKit and Health for iOS. HealthKit is a platform for managing your health and fitness data. The Health application visualizes this data in Apple fashion. Of course, the data Health for iOS shows depends on the input it receives from other applications that collect the data.
An application like Nike+, for example, can share its fitness data through HealthKit and ask for nutrition data that's collected by another application. Apple emphasized that privacy is an important concern. Third party applications can't access your health data without your permission.
HomeKit
It's impressive how many new features, frameworks, and APIs were introduced during this year's keynote. HomeKit is another surprising addition to iOS. The HomeKit framework is an integration between iOS devices and devices that conform to Apple's Home Automation Protocol.
The idea is to bring sanity to the growing market of home automation, in which every manufacturer has its own standards and applications. Apple doesn't seem to be merely testing the waters with HomeKit, as the company showed an impressive list of big brands that claim to support HomeKit. Let's hope HomeKit makes home automation less painful and more consistent for iOS users.
Game On
Another big surprise was the introduction of Metal, a low-level graphics API for iOS devices. As demonstrated during the keynote, Metal aims to replace OpenGL with an API that reduces its overhead and thereby increases graphics performance on iOS devices.
SpriteKit, introduced in iOS 7, has received a significant update with per-pixel physics, inverse kinematics, and field forces. In addition, SceneKit, available on OS X for several years, is now also available on iOS.
TestFlight
The number of important announcements was staggering. We already knew that Apple acquired Burstly, the company behind TestFlight, but I wasn't expecting them to offer it as a service of their own so soon. But they did.
TestFlight, the name hasn't changed, will allow the distribution of beta applications through Apple's TestFlight application. The only downside is that it requires iOS 8. The acquisition of Burstly seems to have nothing but upsides, though. For example, each application, not each developer account, can have up to 1,000 testers, and there is no limit on the number of devices per tester. This is great news, and application provisioning also becomes much simpler thanks to TestFlight. If you thought beta distribution was easy with TestFlight, it just got even easier thanks to, well, TestFlight.
Apple wouldn't be Apple if it didn't tightly control the distribution of builds to testers. Based on the updated iOS Developer License Agreement, an application needs to be reviewed by Apple before it can be distributed to testers. How this will happen and how long it will take for Apple to review tens of thousands of test builds is unclear, but, as Ole Begemann points out, it seems that Apple is more lenient when it comes to reviewing test builds. We'll have to wait until the fall to find out how things will pan out.
Swift
The most important announcement of this year's WWDC keynote was, without a doubt, the introduction of Swift, a brand new programming language for developing iOS and OS X applications. Swift's goal is to make development easier, less painful, and more modern. At first glance, Swift is an expressive programming language with an intuitive, appealing syntax.
Swift has no headers, no semicolons, and it supports closures and generics. Functions can have multiple return values and optional arguments. Another focus of the language is safety. For example, arrays are checked for out-of-bounds access, which eliminates a whole class of memory errors.
Starting from Scratch
Does this mean every Cocoa developer has to start from zero in terms of learning Cocoa development? No. The beauty of Swift is that it integrates nicely with Cocoa and Cocoa Touch.
If you explore some of Apple's code samples, then you'll quickly notice two things. First, the syntax is very easy to learn. It's less verbose compared to C and Objective-C, and more intuitive. Second, Swift leverages existing APIs and frameworks, which means that your knowledge of building iOS and OS X applications will give you a head start if you decide to adopt Swift in your projects.
While there are many features of the Swift language that deserve our attention, I'd like to highlight a few that will take some getting used to if you're an Objective-C developer.
Type Inference
In Swift, types are inferred, which means that you no longer have to declare a variable as an NSString or NSDictionary. The compiler is smart enough to infer the type and it will even optimize your code behind the scenes.
Organization
Say goodbye to header and implementation files. Swift gets rid of header files altogether and I'm sure you don't mind that.
Mind the Semicolon
As in Ruby and CoffeeScript, it's not necessary to end a line of code with a semicolon unless the line contains multiple statements.
Objective-C and C
Swift plays nicely with Objective-C and C. In fact, Swift uses the same runtime Objective-C uses. You can use Swift and Objective-C in the same project without problems. This will make migrating from Objective-C to Swift a bit less of a monumental task.
Xcode 6
Even though Xcode 6 is still in beta, Apple also planned a big release for its integrated development environment. Xcode 6 adds support for Swift, view debugging, improved support for localizing projects, live rendering in Interface Builder, custom iOS fonts, and support for extensions.
This is just a small selection of the new features and improvements of Xcode 6. If you're wondering what Apple has been working on for the past few years, then wonder no more.
Conclusion
I agree with Joshua Topolsky and Craig Hockenberry that the tone of the keynote was incredibly optimistic. Apple is ready to take on its competition and has found its confidence again. Tim Cook didn't miss any opportunity to make fun of Google's Android and show people that Apple is still the leader of the mobile space.
Google I/O is just around the corner and I can't wait to see what Google has in store for us. There has never been a better time to be, or become, a mobile developer.
It's amazing to think that almost ten years ago, when Mono was officially released, C# developers would have the vast majority of the mobile landscape at their fingertips. I remember exactly where I was. It was the summer of 2004 and I was putting the finishing touches on a fat-client desktop application using the .NET Framework 2.0.
I was creating an application for visualizing corporate data centers in Visio and automatically generating migration plans and checkpoints for virtualizing their environments. It was groundbreaking stuff at the time if you ask me.
I was trying to stay on top of .NET as best I could, so when I heard there was going to be an open version, I thought, "neat". Maybe I could run my application on a Linux machine. But I was working at a Microsoft shop and didn't see much use in it, so I dismissed it for a while.
About a year before Mono went live, the company that created it, Ximian, was purchased by Novell, and work on its products continued. Among these products was Mono. In its time under the umbrella of Novell, Mono continued to be improved closely following the growth and functionality of the .NET Framework through Microsoft.
During this time, two very large advancements in the mobile space regarding Mono arrived, MonoTouch and Mono for Android were released in 2009 and 2011, respectively. Much to the amazement of the .NET community, we could now write mobile apps that targeted the iOS and Android platforms in a language that we were familiar with. Unfortunately, this wasn't immediately met with open arms.
While the Android platform didn't seem to have much trouble with this, Apple, on the other hand, wasn't quite as receptive. In mid-2010, Apple updated the terms of its iOS Developer Program to prohibit developers from writing apps in languages other than C, C++, and Objective-C, and to restrict any sort of layer between the iOS platform and iOS applications.
This could certainly have spelled disaster for MonoTouch going forward. Luckily, in late 2010, Apple relaxed the language restrictions and the future of MonoTouch looked bright again, even if only briefly.
As the outlook for MonoTouch users began to look bright again, there was another snag. In early 2011, Attachmate acquired Novell and announced hundreds of layoffs of the Novell workforce. Among those layoffs were several of the founders of the original Mono framework as well as the architects and developers of both MonoTouch and Mono for Android. Once again, we became concerned about the future of our ability to create C# apps running on these new platforms.
Hardly a month after being laid off, Miguel de Icaza created a new company named Xamarin and vowed to continue the development and support of Mono. Novell and Xamarin announced that a perpetual license of Mono, MonoTouch, and Mono for Android would be granted and that Xamarin would now officially take over the project. We once again had the keys to the kingdom.
Getting Up and Running
Roughly three years after the creation of Xamarin, we are left with some truly remarkable tools. These tools are not only remarkable for the fact that they allow us to write C# applications that run on non-Microsoft platforms, but they also are extremely easy to get up and running.
Step 1: Installation
To get started, you simply head over to the Xamarin website, sign up for an account if you don't already have one, and visit the download page. With every account, you get a free 30-day trial of the Business Edition of Xamarin, which provides you with everything you need.
In the last several years, the installation process of Xamarin has improved greatly from the days of MonoTouch and Mono for Android. It's a completely self-contained installation that will detect the required software and their versions to get started, including the appropriate version of the Android SDK.
Step 2: Development Environments
In my mind, the most important feature of the Business (and Enterprise) Edition is its support for Visual Studio. This means you can write all of your iOS and Android applications using not only an IDE that you're comfortable with, but also get the added benefit of any other plugins or extensions for Visual Studio you may be using, Resharper for example.
I don't know about you, but I sure get a jolt of excitement when I open up Visual Studio, select File > New Project and stare straight in the face of the options to create an iOS or Android application.
If your 30-day free trial of the Business Edition has expired by the time you read this, you can simply downgrade to the Starter Edition and continue to play with Xamarin. However, there are a couple of drawbacks to the Starter Edition.
You're no longer able to use Visual Studio.
Your application has a size restriction.
If you're simply using the Starter Edition to play around with Xamarin, these restrictions are no big deal. If you're working on the next big app, though, you will need to pony up for the Business or Enterprise Edition. Every edition also comes with a free IDE, Xamarin Studio.
Xamarin Studio is a full-featured IDE that includes many features you also find in Visual Studio, so you definitely don't need to feel shorted in any way if you choose to use Xamarin Studio. I feel very comfortable using it and it's truly a joy to work with.
The nice thing is that the solution and project structures are interchangeable with those of Visual Studio. This means that if you have a license for an edition of Xamarin that allows you to use Visual Studio, you can work on the same solution in either IDE. This enables cross-team development between developers that use either a Windows or Mac based system. There is no need for virtualization software, because Xamarin Studio is available for both Windows and OS X.
Step 3: Configuration
When you get started with Xamarin, it's important to be aware of the configuration options. To get into the basic configuration, select Tools > Options from Visual Studio or Xamarin Studio.
I currently only have Xamarin.Android installed. If you've also installed Xamarin.iOS, you will see more configuration options. On the right side of the options dialog, you will see the following options in Visual Studio.
In Xamarin Studio, similar options are split across the SDK Locations, Debugger, and Android tree view items in the Projects section. Let me walk you through the various configuration options.
Android SDK and NDK Locations
As you may have guessed, these settings are used to set the location of the Android bits on your machine. I typically don't find it necessary to modify these and most often a clean installation of Xamarin—sometimes with an update or two—will download and install all the appropriate versions in the correct locations. If you're a more seasoned Android developer who needs to have access to multiple versions and be able to switch back and forth, you have that ability.
Preserve application data/cache on device between deploys
This is probably the configuration option that I use most. When I'm writing an application that works with local sandbox data on a device or in the emulator, such as files or a database, I will eventually check this box.
During the deployment of an application to a device or the emulator, all existing data, including database files, is removed and must be created again. In the early stages of development, when I want to make sure the database is being created successfully, this is fine. After that point, however, I'll want to work with a populated database and not have to set that data up every time the application is deployed.
Provide debug symbols for shared runtime and base class libraries (Visual Studio only)
During the development cycle, you're deploying your debug build to a device or on the emulator. By default, bundled with your application are your debug symbols that allow you to debug, set breakpoints in your own code, and step through lines while the application is running. This option allows you to also have access to the debug symbols in the shared Mono runtime as well as the base class libraries to give you more information about what's happening in those areas as well as your own code.
This is only used for debugging purposes. A similar option is found in Xamarin Studio. You can find it under the Debugger option as Debug project code only; do not step into framework code. You will need to uncheck this checkbox to step into the Mono framework.
Additional Emulator Launch Arguments
If you have the need to tweak the Android emulator with additional settings that you could typically set manually when running, this option allows you to pass those arguments directly to the emulator through Visual Studio or Xamarin Studio.
Notify me about updates (Visual Studio only)
The Xamarin software is constantly evolving and being updated. It really pays to stay on top of any changes to the version you're currently using as well as what's coming. Here is where you can set which types of updates you want to be notified of. You will be asked to download and install new versions if you check this checkbox.
I typically stay with stable releases for applications that are scheduled for release, but I like to have an alpha or beta to play with. There's a similar option in Xamarin Studio, but in a different location. It can be found under Help > Check for Updates. Here you can choose the Update channel as Stable, Beta, or Alpha, just as in Visual Studio.
Extension debug logging (writes monodroid.log to your desktop) (Visual Studio only)
This option enables device deployment logging. When this option is enabled, Visual Studio directs output from the deployment log to the monodroid.log file found on your desktop. This option is not available in Xamarin Studio, at least not as a configuration.
Xamarin Studio always writes device deployment logs, but they are a little more difficult to find. On Windows, you can find them in the %LOCALAPPDATA%\XamarinStudio-{VERSION}\Logs folder, where VERSION is the version of Xamarin Studio you're using. The files are created in that folder with the naming convention AndroidTools-{DATE}__{TIME}, where DATE is the current date of the deployment and TIME is the actual time of the deployment.
Writing Code
Before I address the beauty of this code, and boy is it beautiful, it is important for you to understand that just because you can write iOS and Android applications in C#, doesn't mean you can just write iOS and Android applications in C#. The Xamarin team has done a fantastic job enabling developers with a background in C# to have the ability to create iOS and Android applications. The problem lies in knowledge. Let me explain this in more detail.
You may have the C# knowledge you need, but unless you have dabbled in iOS or Android development in the past, you don't have the platform knowledge. In order to make Xamarin usable to all levels of iOS and Android developers, the Xamarin team has mapped the language constructs and class structures from Objective-C for iOS and Java for Android into C#.
So what does that mean? It means that you at least need to have a basic understanding of the iOS and Android programming model and SDKs to be able to take advantage of Xamarin. For example, the AVFoundation framework in Objective-C is exposed as the MonoTouch.AVFoundation namespace in C#, and the Activity class in Java is the Android.App.Activity class in C#.
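As a small illustration of this mapping, consider the sketch below. The class and label names are made up for the example; only the namespaces and the Activity base class reflect how Xamarin surfaces the Android SDK in C#.

```csharp
// Java's android.app.Activity surfaces in C# as Android.App.Activity.
// The class and label below are hypothetical, used only to show the mapping.
using Android.App;
using Android.OS;

[Activity(Label = "MappingExample")]
public class MappingExampleActivity : Activity
{
    protected override void OnCreate(Bundle bundle)
    {
        // The same lifecycle method you would override in Java, expressed in C#.
        base.OnCreate(bundle);
    }
}
```

The shape of the code is pure C#, but the concepts (activities, lifecycle methods, resource attributes) come straight from the Android platform, which is why the platform knowledge still matters.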
If you don't have any experience with iOS or Android, then don't let your lack of knowledge deter you from using Xamarin. You don't need to spend months, days, or even hours on the iOS Dev Center or the Android Developer website. The point is that becoming familiar with the platform you're developing for is more than worth your time if your ambition is to create a high quality product.
My suggestion is to go straight to the Xamarin Developer Center and get up and running quickly. You will find documentation, sample applications, tutorials, videos, and API references. Everything after getting started is simply researching how to accomplish certain tasks. Once you get a good handle on the APIs and the development flow, you can go back to the iOS and Android resources to get a more in depth knowledge of the platforms.
Let's Create an Application
Now that you have the necessary tools downloaded and installed, let's take them for a spin. To follow along, you can use either Visual Studio or Xamarin Studio, because I'll be focusing on the code, not the IDE. For this tutorial, I'll be using Visual Studio 2013 running on Windows, but you're free to use either IDE on Windows or Xamarin Studio on OS X. We will be creating a simple Android application that will read the current news feed from Xamarin and we'll call it XamFeed.
Create A Xamarin.Android Project
Start as you would with any other C# application by creating a new project/solution and naming it XamFeed. Typical naming conventions for an application like this would append .Android to the end of the name. This is to differentiate the name of this application from any other platform specific version you may create later (like .iOS, .Mac, .WindowsPhone, etc).
This will be a very simple application, so we will keep the name simple as well. You can choose any of the Android templates you want, Android Application, Android Honeycomb Application, or Android Ice Cream Sandwich Application. These just set the base version of Android that our application will target. I will use the basic Android Application template.
Write Some Code
In the Solution Explorer, open the MainActivity class, which will be the main entry point of our application. I like to rename this Activity to better represent the purpose it will serve, so go ahead and rename it to FeedActivity.
If you're unfamiliar with activities, think of an Activity as a screen or view within your Android application. Each screen you need in your application will have a corresponding class that inherits from the Activity base class.
In the FeedActivity class, you have the ability to override a number of methods that are provided for you out of the box. The only one that we are concerned about at the moment is the OnCreate method that will be called when our screen is created and accessible to the user.
The first thing we'll do is create a new class that represents the feed. You can obviously expand upon this, but all we need for now is the Title, PubDate, Creator, and Link to the item's content.
public class RssItem
{
    public string Title { get; set; }
    public string PubDate { get; set; }
    public string Creator { get; set; }
    public string Link { get; set; }
}
We can now change the implementation of the OnCreate method within our FeedActivity class to get the data from the Xamarin feed. Replace the OnCreate implementation with the following:
[Activity(Label = "XamFeed", MainLauncher = true, Icon = "@drawable/icon")]
public class FeedActivity : ListActivity
{
    private RssItem[] _items;

    protected async override void OnCreate(Bundle bundle)
    {
        base.OnCreate(bundle);

        using (var client = new HttpClient())
        {
            var xmlFeed = await client.GetStringAsync("http://blog.xamarin.com/feed/");
            var doc = XDocument.Parse(xmlFeed);
            XNamespace dc = "http://purl.org/dc/elements/1.1/";

            _items = (from item in doc.Descendants("item")
                      select new RssItem
                      {
                          Title = item.Element("title").Value,
                          PubDate = item.Element("pubDate").Value,
                          Creator = item.Element(dc + "creator").Value,
                          Link = item.Element("link").Value
                      }).ToArray();

            ListAdapter = new FeedAdapter(this, _items);
        }
    }

    protected override void OnListItemClick(ListView l, View v, int position, long id)
    {
        base.OnListItemClick(l, v, position, id);

        var second = new Intent(this, typeof(WebActivity));
        second.PutExtra("link", _items[position].Link);
        StartActivity(second);
    }
}
Let's walk through this code snippet line by line to see what's going on.
The ActivityAttribute that decorates the FeedActivity class is the mechanism that Xamarin.Android uses to let the target device or the emulator know that this is an Activity (or screen) that is accessible within the application. This is required for all Activity classes within your application.
private RssItem[] _items;
We are going to save all the feed items that we pull from the Xamarin website in a variable to prevent us from constantly making HTTP requests. You may want to handle this differently depending on whether or not you want to update this screen later with new content. In our simple application, we won't do this.
Next, we override the OnCreate method that's exposed through the Activity base class in our FeedActivity class. This method is called every time this Activity is instantiated by the Android runtime. As you can see, we can also use the C# 5.0 async/await feature to make this method asynchronous.
base.OnCreate(bundle);
Make sure to call the base.OnCreate method on the base Activity class. This will ensure that any processing the base class does during the OnCreate method will continue to run.
using (var client = new HttpClient())
To fetch the RSS data from the Xamarin website, we're going to use the HttpClient class as it provides a number of convenient asynchronous methods to retrieve data over HTTP.
var xmlFeed = await client.GetStringAsync("http://blog.xamarin.com/feed/");
var doc = XDocument.Parse(xmlFeed);
We then invoke the GetStringAsync method on the HttpClient class to retrieve the feed data and parse it into an XDocument object to do some LINQ to XML magic.
XNamespace dc = "http://purl.org/dc/elements/1.1/";
_items = (from item in doc.Descendants("item")
select new RssItem
{
Title = item.Element("title").Value,
PubDate = item.Element("pubDate").Value,
Creator = item.Element(dc + "creator").Value,
Link = item.Element("link").Value
}).ToArray();
To correctly retrieve elements from the resulting XDocument object, we need to create an instance of the XNamespace class that represents the Dublin Core (dc) namespace used within the Xamarin RSS feed. We can then run a LINQ query against the XDocument to pull all the items out and create new instances of the RssItem class based on the item properties.
ListAdapter = new FeedAdapter(this, _items);
Finally, we use a custom adapter to populate the ListView of the FeedActivity, which is defined in the Main.axml document in the Resources/Layout folder. Think of adapters in Android as a mechanism to provide some customized formatting of elements or widgets within the user interface. All user interface components that use adapters, such as a ListView, use default adapters if you don't specify one explicitly, but you can always replace them with your own.
The final piece of the puzzle for the FeedActivity class is to override the OnListItemClick method so that we can open up a new Activity that shows us the actual content of the individual feed items that we touch.
base.OnListItemClick(l, v, position, id);
Once again, we call the base class method to be sure that all normal processing is being carried out.
var second = new Intent(this, typeof(WebActivity));
second.PutExtra("link", _items[position].Link);
StartActivity(second);
We now follow the Android design pattern for passing data to a new Activity. This will become very familiar to you as you create more applications that involve multiple screens. We create a new Intent object, which is the Android way of passing data to a new Activity. We pass it two arguments: this, representing the context in which the call originates, and the Type object of the Activity we're navigating to.
Once we have the new Intent object, we put things, typically strings, into it and pass it on. In this case, we use the PutExtra method to add a key/value pair to the Intent, and start the transition process to the WebActivity screen with the StartActivity method.
Based on the code involved in creating the FeedActivity screen, we now need to create a FeedAdapter class that populates and formats the RssItem data in our ListView, and a WebActivity class to represent the next screen. Let's start with the FeedAdapter class.
public class FeedAdapter : BaseAdapter<RssItem>
{
    private RssItem[] _items;
    private Activity _context;

    public FeedAdapter(Activity context, RssItem[] items) : base()
    {
        _context = context;
        _items = items;
    }

    public override RssItem this[int position]
    {
        get { return _items[position]; }
    }

    public override int Count
    {
        get { return _items.Count(); }
    }

    public override long GetItemId(int position)
    {
        return position;
    }

    public override View GetView(int position, View convertView, ViewGroup parent)
    {
        var view = convertView;

        if (view == null)
        {
            view = _context.LayoutInflater.Inflate(Android.Resource.Layout.SimpleListItem2, null);
        }

        view.FindViewById<TextView>(Android.Resource.Id.Text1).Text = _items[position].Title;
        view.FindViewById<TextView>(Android.Resource.Id.Text2).Text = string.Format("{0} on {1}", _items[position].Creator, _items[position].PubDate);

        return view;
    }
}
Yikes. That's a lot of code. It's actually fairly simple though. We need to override four methods/properties of the base class, BaseAdapter. In our case, the generic parameter is our RssItem class. The first three are fairly self-explanatory.
this[int position] returns an RssItem at the given position in the array.
Count returns the number of RssItem objects in the array.
GetItemId returns the Id of an RssItem at a given position, the position in our example.
The last, and slightly more complicated, override is the GetView method. This method returns the view for each row of the ListView within our Activity, inflating it as a SimpleListItem2, a built-in Android list item layout that allows two rows of text in a single item, whenever an existing view isn't available for reuse. We then set the first row of text to the RssItem.Title property and the second row of text to a concatenation of the RssItem.Creator property and the RssItem.PubDate property.
With the adapter set, we can focus on the second screen of our application, WebActivity.
The structure is similar to the one of the FeedActivity class. We are once again using the ActivityAttribute to decorate the WebActivity class. There are only three slightly different lines in this method that we didn't encounter before.
SetContentView(Resource.Layout.WebActivity);
The SetContentView method is a nice helper method that will map our C# Activity class to the specified layout file. In this example, we are referencing the WebActivity.axml file.
The last two lines are specific to the WebView control within our layout. We use the FindViewById method to get a reference to the specified WebView control and call the LoadUrl method and pass it the data that was sent to this Activity, via an Intent, from the FeedActivity class.
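Putting the lines just described together, a minimal version of the WebActivity class might look like the sketch below. The WebView lookup id (Resource.Id.webView) is an assumption; match it to whatever id you give the control in the WebActivity.axml layout.

```csharp
using Android.App;
using Android.OS;
using Android.Webkit;

[Activity(Label = "XamFeed")]
public class WebActivity : Activity
{
    protected override void OnCreate(Bundle bundle)
    {
        base.OnCreate(bundle);

        // Map this Activity to the WebActivity.axml layout file.
        SetContentView(Resource.Layout.WebActivity);

        // Grab the WebView declared in the layout and load the URL that
        // FeedActivity passed along via the Intent.
        // Resource.Id.webView is an assumed id; adjust it to your layout.
        var webView = FindViewById<WebView>(Resource.Id.webView);
        webView.LoadUrl(Intent.GetStringExtra("link"));
    }
}
```

Note how Intent.GetStringExtra mirrors the PutExtra call made in FeedActivity: the same key, "link", is used on both sides of the handoff.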
The last pieces of the puzzle are the layout files that define the placing and naming of controls on the individual screens. The first one is the Main.axml file in the Resources/Layout folder in your solution. Simply replace its contents with the following:
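Because FeedActivity is a ListActivity, the layout needs little more than a ListView with the built-in id @android:id/list, which is where a ListActivity looks for its list by convention. The sketch below is one possible version of such a layout; the exact attribute values are assumptions.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- A ListActivity binds to the ListView with the built-in id @android:id/list. -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">
    <ListView
        android:id="@android:id/list"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent" />
</LinearLayout>
```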
Once you've completed creating all the pieces of this application, you should be able to successfully build the application and deploy it to the Android Emulator. The build process is just like any other application you've created in Visual Studio. You can debug your application by pressing F5 or run your application using Control-F5. The only difference is that you have a number of deployment options you can configure. For this tutorial, we are interested in running the application in the Android Emulator, but if you have a physical device you can run your application on there as well.
In the Xamarin.Android toolbar, you have several different options. You have a drop-down menu that lets you specify the Android version the emulator should run. For this application, I have chosen to run on the latest version of Android, Android_API_19 or KitKat at the time of writing.
If you don't have the latest version of the SDK like I do here, you can open the Android SDK Manager and download the version you'd like to run your application on. If you open the Android SDK Manager, you can choose from a plethora of different Android SDK versions and a few additional tools.
You also have the option to configure the available emulators or create your own. This is done through the Android Emulator Manager in which you can create, edit, copy, and delete emulators.
By clicking the New or Edit button on the right, you are presented with a dialog through which you can configure the emulator.
Once everything's configured the way you like, it's time for the moment of truth, running your Android application. Press F5 and wait for the emulator to launch, which can take some time. It's therefore a good idea to leave the emulator open, so you don't have to wait for it to start up every time you deploy your application. Once your application is running in the emulator, you should see something like this.
Your view may differ slightly, depending on how you have configured the emulator. Tapping or clicking one of the titles should take you to a web view within your application that looks similar to the one below.
Conclusion
There you have it. You have successfully created an Android application using Visual Studio, C#, and a little help from your friends at Xamarin.
From here, you can take a number of steps. You can tailor this application to make it completely your own or leave it as is and impress your friends. Either way, you've taken a big step into the world of non-Microsoft mobile development using Microsoft tools. That's pretty cool in itself if you ask me.
Next time, we'll tackle the world of iOS development using Xamarin.iOS, which is a very similar process with only a few differences. Excited? I know I am. Until next time and happy coding.
In this tutorial, you'll learn how to create a mobile 2D game using C# and Unity. We'll be taking advantage of the Dolby Audio Plugin for Unity to enhance the game's audial experience. The objective of the game is simple, reaching the other side of the level while avoiding enemies and collecting coins.
In this tutorial, you will learn the following aspects of Unity game development:
setting up a 2D project in Unity
creating Prefabs
movement and action buttons
working with physics collisions
using a sprite sheet
integrating the Dolby Audio API
1. Create a New Unity Project
Open Unity and select New Project from the File menu to open the new project dialog. Tell Unity where you want to save the project and set the Set up defaults for: drop-down menu to 2D.
2. Build Settings
In the next step, you're presented with Unity's user interface. Set the project up for mobile development by choosing Build Settings from the File menu and select Android as the target platform.
3. Devices
Since we're about to create a 2D game, the first thing we need to do after selecting the target platform is choosing the size of the artwork that we'll use in the game. Because Android is an open platform, there's a wide range of devices, screen resolutions, and pixel densities available on today's market. A few of the more common ones are:
Samsung Galaxy SIII: 720px x 1280px, 306 ppi
Asus Nexus 7 Tablet: 800px x 1280px, 216 ppi
Motorola Droid X: 854px x 480px, 228 ppi
Even though we'll be focusing on the Android platform in this tutorial, you can use the same code to target any of the other platforms that Unity supports.
4. Export Graphics
Depending on the devices you're targeting, you may need to convert the artwork for the game to the recommended size and pixel density. You can do this in your favorite image editor. I've used the Adjust Size... function under the Tools menu in OS X's Preview application.
5. Unity User Interface
Before we get started, make sure to click the 2D button in the Scene panel. You can also modify the resolution that's being displayed in the Game panel.
6. Game Interface
The interface of our game will be straightforward. The above screenshot gives you an idea of the artwork we'll be using and how the final game interface will end up looking. You can find the artwork for this tutorial in the source files of this tutorial.
7. Programming Language
You can use one of three programming languages with Unity: C#, UnityScript (a variation of JavaScript), and Boo. Each of these programming languages has its pros and cons and it's up to you to decide which one you prefer. My personal preference goes to the C# programming language so that's the language I'll be using in this tutorial.
If you decide to use another programming language, make sure to take a look at Unity's Script Reference for examples.
8. 2D Graphics
Unity has built a name for being a great platform for creating 3D games for various platforms, such as Microsoft's Xbox 360, Sony's PS3, Nintendo's Wii, the web, and various mobile platforms.
While it's always been possible to use Unity for 2D game development, it wasn't until the release of Unity 4.3 that it included native 2D support. We'll learn how to work with images as sprites instead of textures in the next steps.
9. Sound Effects
I'll use a number of sounds to create a great audial experience for the game. The sound effects used in this tutorial were obtained from as3sfxr and PlayOnLoop.
10. Import Assets
Before we start coding, we need to add our assets to the Unity project. You can do this one of several ways:
select Import New Asset from the Assets menu
add the items to the assets folder in your project
drag and drop the assets in the project window
After completing this step, you should see the assets in your project's Assets folder in the Project panel.
11. Create Scene
We're ready to create the scene of our game by dragging objects to the Hierarchy or Scene panel.
12. Background
Start by dragging and dropping the background into the Hierarchy panel. It should automatically appear in the Scene panel.
Because the Scene panel is set to display a 2D view, you'll notice that selecting the Main Camera in the Hierarchy shows a preview of what the camera is going to display. You can also see this in the Game view. To make the entire scene visible, change the Size value of the Main Camera to 1.58 in the Inspector panel.
13. Floor
The floor is used to keep our main character from falling once we've added physics to the game. Drag it from the Assets folder and position it in the scene as shown below.
14. Floor Collider
In order to make the floor detect when the character is touching it, we need to add a component, a Box Collider 2D to be precise.
Select the floor in the scene, open the Inspector panel, and click Add Component. From the list of components, select Box Collider 2D from the Physics 2D section.
15. Jump Button
We'll use buttons to control our main character in the game. Drag and position the jump button in the Scene and add a Circle Collider 2D component as shown in the previous step.
16. Jump Sound
To play a sound when the character jumps, we first need to attach the sound to the jump button. Select the button from the Hierarchy or Scene view, click the Add Component button in the Inspector panel, and select Audio Source in the Audio section.
Uncheck Play on Awake and click the little dot on the right, below the gear icon, to select the sound we want to play when the player taps the button. In the next step, we'll implement the logic for playing the sound when the player taps the button.
17. Jump Script
Let's create the script that will control our character. Select the jump button and click the Add Component button in the Inspector panel. Select New Script and name it Jump. Don't forget to change the language to C#.
Open the newly created file and add the following code snippet.
using UnityEngine;
using System.Collections;

public class Jump : MonoBehaviour
{
    public float jumpForce;
    private GameObject hero; // used to reference our character (hero) in the scene

    // Use this for initialization
    void Start()
    {
        hero = GameObject.Find("Hero"); // gets the hero game object
    }

    // Update is called once per frame
    void Update()
    {
        /* Check if the user is touching the button on the device */
        if (Application.platform == RuntimePlatform.Android)
        {
            if (Input.touchCount > 0)
            {
                if (Input.GetTouch(0).phase == TouchPhase.Began)
                {
                    CheckTouch(Input.GetTouch(0).position, "began"); // function created below
                }
                else if (Input.GetTouch(0).phase == TouchPhase.Ended)
                {
                    CheckTouch(Input.GetTouch(0).position, "ended");
                }
            }
        }

        /* Check if the user is touching the button in the editor; change OSXEditor if you are on Windows */
        if (Application.platform == RuntimePlatform.OSXEditor)
        {
            if (Input.GetMouseButtonDown(0))
            {
                CheckTouch(Input.mousePosition, "began");
            }

            if (Input.GetMouseButtonUp(0))
            {
                CheckTouch(Input.mousePosition, "ended");
            }
        }
    }

    void CheckTouch(Vector3 pos, string phase)
    {
        /* Get the world point where the user is touching */
        Vector3 wp = Camera.main.ScreenToWorldPoint(pos);
        Vector2 touchPos = new Vector2(wp.x, wp.y);
        Collider2D hit = Physics2D.OverlapPoint(touchPos);

        /* If the button is touched... */
        // Check hit for null first; the user may touch an area with no collider
        if (hit != null && hit.gameObject.name == "JumpButton" && phase == "began")
        {
            hero.rigidbody2D.AddForce(new Vector2(0f, jumpForce)); // add jump force to hero
            audio.Play(); // play the audio attached to this game object (jump sound)
        }
    }
}
The code snippet may seem daunting, but it's actually pretty straightforward. We first get a reference to the hero object, an instance of the GameObject class, so we can use it later. We then detect if the user is touching the jump button and, if they are, add a force to the hero object. Last but not least, we play the jump sound when the jump button is tapped.
18. Movement Buttons
The steps to add and implement the movement buttons, left and right, are very similar. Start by placing the buttons in the scene and add a Circle Collider 2D to each button like we did with the jump button.
19. Movement Scripts
Create a new script, attach it to the left button, and name it MoveLeft. Replace its contents with the following code snippet, which implements the MoveLeft class.
using UnityEngine;
using System.Collections;

public class MoveLeft : MonoBehaviour
{
    public Vector3 moveSpeed = new Vector3();
    private bool moving = false;
    private GameObject[] scene; // array of the game objects that make up the scene
    private GameObject bg;

    // Use this for initialization
    void Start()
    {
        scene = GameObject.FindGameObjectsWithTag("Moveable"); // game objects with the Moveable tag
        bg = GameObject.Find("Background"); // game background
    }

    // Update is called once per frame
    void Update()
    {
        /* Detect touch */
        if (Application.platform == RuntimePlatform.Android)
        {
            if (Input.touchCount > 0)
            {
                if (Input.GetTouch(0).phase == TouchPhase.Began)
                {
                    CheckTouch(Input.GetTouch(0).position, "began");
                }
                else if (Input.GetTouch(0).phase == TouchPhase.Ended)
                {
                    CheckTouch(Input.GetTouch(0).position, "ended");
                }
            }
        }

        if (Application.platform == RuntimePlatform.OSXEditor)
        {
            if (Input.GetMouseButtonDown(0))
            {
                CheckTouch(Input.mousePosition, "began");
            }

            if (Input.GetMouseButtonUp(0))
            {
                CheckTouch(Input.mousePosition, "ended");
            }
        }

        // Move if the button is pressed
        if (moving && bg.transform.position.x < 4.82f)
        {
            for (int i = 0; i < scene.Length; i++)
            {
                if (scene[i] != null)
                {
                    scene[i].transform.position += moveSpeed;
                }
            }
        }
    }

    void CheckTouch(Vector3 pos, string phase)
    {
        Vector3 wp = Camera.main.ScreenToWorldPoint(pos);
        Vector2 touchPos = new Vector2(wp.x, wp.y);
        Collider2D hit = Physics2D.OverlapPoint(touchPos);

        // Check hit for null before accessing its game object
        if (hit != null && hit.gameObject.name == "LeftButton" && phase == "began")
        {
            moving = true;
        }

        if (hit != null && hit.gameObject.name == "LeftButton" && phase == "ended")
        {
            moving = false;
        }
    }
}
In this script, we create an array of the elements tagged as Moveable—we'll tag them later in this tutorial—to make it easier to move them all at once. To move the elements, we first check if the button is being touched and change the position using moveSpeed in the Update function. It's as simple as that.
Create another script, attach it to the right button, and name it MoveRight. Its implementation is similar to that of the MoveLeft class we saw a moment ago. We change the direction of the movement by replacing += moveSpeed with -= moveSpeed, which moves the scene in the opposite direction.
In the MoveRight script, we also check if the player has completed the level.
using UnityEngine;
using System.Collections;

public class MoveRight : MonoBehaviour
{
    public Vector3 moveSpeed = new Vector3();
    private bool moving = false;
    private GameObject[] scene;
    private GameObject bg;
    public AudioClip completeSound;
    private GameObject[] buttons;
    private GameObject completeText;
    private bool ended = false;
    public Font goodDog;

    // Use this for initialization
    void Start()
    {
        scene = GameObject.FindGameObjectsWithTag("Moveable");
        bg = GameObject.Find("Background");
        buttons = GameObject.FindGameObjectsWithTag("Buttons");
    }

    // Update is called once per frame
    void Update()
    {
        if (Application.platform == RuntimePlatform.Android)
        {
            if (Input.touchCount > 0)
            {
                if (Input.GetTouch(0).phase == TouchPhase.Began)
                {
                    CheckTouch(Input.GetTouch(0).position, "began");
                }
                else if (Input.GetTouch(0).phase == TouchPhase.Ended)
                {
                    CheckTouch(Input.GetTouch(0).position, "ended");
                }
            }
        }

        if (Application.platform == RuntimePlatform.OSXEditor)
        {
            if (Input.GetMouseButtonDown(0))
            {
                CheckTouch(Input.mousePosition, "began");
            }

            if (Input.GetMouseButtonUp(0))
            {
                CheckTouch(Input.mousePosition, "ended");
            }
        }

        // Move if the button is pressed and the stage is not over
        if (moving && bg.transform.position.x > -4.8f)
        {
            for (int i = 0; i < scene.Length; i++)
            {
                if (scene[i] != null)
                {
                    scene[i].transform.position -= moveSpeed;
                }
            }
        }

        // Stage completed
        if (bg.transform.position.x <= -4.8f && ended == false)
        {
            Alert("complete");
        }
    }

    void CheckTouch(Vector3 pos, string phase)
    {
        Vector3 wp = Camera.main.ScreenToWorldPoint(pos);
        Vector2 touchPos = new Vector2(wp.x, wp.y);
        Collider2D hit = Physics2D.OverlapPoint(touchPos);

        // Check hit for null before accessing its game object
        if (hit != null && hit.gameObject.name == "RightButton" && phase == "began")
        {
            moving = true;
        }

        if (hit != null && hit.gameObject.name == "RightButton" && phase == "ended")
        {
            moving = false;
        }
    }

    public void Alert(string action)
    {
        ended = true;
        completeText = new GameObject();
        completeText.AddComponent("GUIText");
        completeText.guiText.font = goodDog;
        completeText.guiText.fontSize = 50;
        completeText.guiText.color = new Color(1f, 0f, 0f); // color components range from 0 to 1

        if (action == "complete")
        {
            AudioSource.PlayClipAtPoint(completeSound, transform.position);
            completeText.guiText.text = "Level Complete!";
            completeText.guiText.transform.position = new Vector3(0.24f, 0.88f, 0);
        }
        else
        {
            completeText.guiText.text = "Game Over";
            completeText.guiText.transform.position = new Vector3(0.36f, 0.88f, 0);
        }

        bg.GetComponent<AudioSource>().Stop(); // stop the sound attached to the background

        for (int i = 0; i < buttons.Length; i++)
        {
            buttons[i].renderer.enabled = false;
        }

        Invoke("restart", 2); // restart the level after a two-second delay
    }

    void restart()
    {
        Application.LoadLevel(Application.loadedLevel);
    }
}
The Alert function creates and displays a message to the player and plays the sound attached to the background sprite. For this to work, add the corresponding sound to the background sprite as we saw earlier in this tutorial. We also hide the buttons and restart the game with a delay of two seconds.
20. Sprite Sheet
We'll use a sprite sheet for the rest of the game elements. Unity has a sprite editor that makes using sprites a breeze. The artwork used in this tutorial was obtained from OpenGameArt.org.
Import the artwork, select it from the Assets panel, and change the Sprite Mode option to Multiple in the Inspector panel.
Open the Sprite Editor by clicking the button below and select Slice > Automatic.
21. Hero
With the sprite sheet sliced and ready to use, click the arrow that appears when the sprite sheet is selected and choose the sprite for the hero, the main character of our game. Place it on the scene and add a Collider 2D component to it.
22. Hero RigidBody 2D
To detect a collision with our hero, at least one of the colliding objects needs to have a RigidBody 2D component attached to it. To add one to our hero, select Add Component in the Inspector panel, followed by Physics 2D > RigidBody 2D.
Check the Fixed Angle box to prevent the hero from rotating if a collision occurs.
23. Hero Sound
When our hero is hit by an enemy, we play another sound to give the player feedback. If you've ever played Super Mario Bros., then you probably know what effect we're after. To accomplish this, we first need to add the sound. Select the hero from the Hierarchy or Scene view, click the Add Component button in the Inspector panel, and select Audio Source in the Audio section.
The details of the audio component will show up in the Inspector Panel. Click the dot below the gear icon and select the hit sound.
24. Collecting Coins
As in many traditional 2D platformers, you can collect coins in our game. Because we'll use this object multiple times in the game, we'll convert it to a Prefab once we've added all the necessary components.
Drag the coin from the Assets folder and add a Collider 2D as we saw in the previous steps.
25. Coin Sound
We play a sound whenever our hero collects a coin. Add an Audio Source component as we saw a moment ago and select the coin sound from the project's assets.
26. Coin Script & Prefab
Attach this simple script to the coin. It detects when the coin and the hero collide. The coin is destroyed and a sound is played to indicate that the coin has been collected by the hero.
using UnityEngine;
using System.Collections;

public class GrabCoin : MonoBehaviour
{
    void OnTriggerEnter2D(Collider2D other)
    {
        if (other.gameObject.name == "Hero")
        {
            audio.Play();
            Destroy(gameObject.collider2D);
            gameObject.renderer.enabled = false;
            Destroy(gameObject, 0.47f); // destroy the object -after- the sound has played
        }
    }
}
With all the components in place, drag the coin from the Hierarchy panel to the Assets panel to convert it to a Prefab. You'll notice the text becomes blue indicating it's now a Prefab.
27. Enemy
Let's not forget the enemies of the game. Drag the artwork for the enemy from the Assets folder and add two Collider 2D components as shown in the screenshot below.
The colliders are reduced in size to prevent the hero from colliding with both colliders at once. Change the settings of each Collider 2D component as below.
The first collider in the panel is the topmost collider that we've added to the enemy. It will detect if the hero jumps on top of the enemy and destroys it. The logic for this action is shown in the script below.
We mark the second collider as a trigger by checking the checkbox labeled Is Trigger. It detects when the enemy runs into the hero or vice versa. When that happens, the player loses the game.
The script attached to the enemy is shown below and implements the logic we just discussed. As you can see, the enemy is moved to the left in every frame and the script detects when the hero jumps on top of the enemy or when the hero runs into the enemy.
using UnityEngine;
using System.Collections;

public class Enemy : MonoBehaviour
{
    public Vector3 moveSpeed;
    public AudioClip hitSound;
    public GameObject alertBridge;

    // Update is called once per frame
    void Update()
    {
        transform.position -= moveSpeed; // move the enemy to the left
    }

    void OnCollisionEnter2D(Collision2D other) // hero jumps on enemy
    {
        if (other.gameObject.name == "Hero")
        {
            AudioSource.PlayClipAtPoint(hitSound, transform.position);
            Destroy(gameObject);
        }
    }

    void OnTriggerEnter2D(Collider2D other) // hero hits the side of the enemy
    {
        if (other.gameObject.name == "Hero")
        {
            other.gameObject.audio.Play(); // play audio
            Destroy(other.gameObject.collider2D); // remove the collider to avoid replaying the audio
            other.gameObject.renderer.enabled = false; // make the object invisible
            Destroy(other.gameObject, 0.626f); // destroy the object when the audio is done playing; destroying it earlier would cut the audio off
            alertBridge.GetComponent<MoveRight>().Alert("gameover"); // the Alert method lives in the MoveRight script
        }
    }
}
28. Bricks
Bricks are used as platforms. The hero can jump on the bricks to avoid enemies and collect coins. Drag the brick artwork from the Assets panel and add a Collider 2D component to it. Don't forget to convert it to a Prefab, because it will be used quite a bit in the game.
29. The End
We'll use a sprite to show the finish line of the level. Drag it from the Assets panel to the Scene as shown in the screenshot below.
30. Dolby Audio Plugin
Let's enhance the audial experience of our game by using the Dolby Audio Plugin for Unity. However, let me first explain why you should be using the Dolby Audio Plugin and how it will improve your game.
Dolby Digital Plus is an advanced audio solution built into many mobile devices, including tablets. Mobile applications can leverage the Dolby Digital Plus capabilities via its API. Some of the benefits include Audio Optimization, Volume Maximization, and Volume Leveling. Dolby has made its API available for several platforms, including Android and Kindle Fire. In our game, we will take advantage of the Dolby Audio Plugin for Unity.
Note that the plugin for Unity is free to use and very easy to integrate. In other words, there's no reason not to include it in your next game.
Start by downloading Dolby's Unity plugin. You can download it from the Unity Asset Store or directly from Dolby's developer website. If you choose the latter option, then create a free account to download the plugin or log in if you already have a Dolby developer account. Extract the package and copy the version you need to Assets > Plugins > Android. That's how easy it is to install the plugin for Unity.
Create a new script and attach it to an object that is always present in the game like the background or the camera. Name the script Dolby and populate it with the following code snippet.
using UnityEngine;
using System.Collections;
using System.Runtime.InteropServices; // allows us to use DllImport

public class Dolby : MonoBehaviour
{
    private GameObject debugText;
    public Font arial;

    /* Import the plugin functions */
    [DllImport("DSPlugin")]
    public static extern bool isAvailable();
    [DllImport("DSPlugin")]
    public static extern int initialize();
    [DllImport("DSPlugin")]
    public static extern int setProfile(int profileid);
    [DllImport("DSPlugin")]
    public static extern int suspendSession();
    [DllImport("DSPlugin")]
    public static extern int restartSession();
    [DllImport("DSPlugin")]
    public static extern void release();

    // Use this for initialization
    void Start()
    {
        /* Text field created for feedback */
        debugText = new GameObject();
        debugText.AddComponent("GUIText");
        debugText.guiText.font = arial;
        debugText.guiText.fontSize = 14;
        debugText.guiText.color = new Color(1f, 0f, 0f); // color components range from 0 to 1
        debugText.transform.position = new Vector3(0, 1, 0);

        /* Initialize Dolby if available */
        if (isAvailable())
        {
            Invoke("Init", 0.1f); // wait 100ms to make sure the Dolby service is enabled
        }
        else
        {
            debugText.guiText.text = "Dolby Sound Not Available";
        }
    }

    void Init()
    {
        debugText.guiText.text = "Dolby Sound Available";
        setProfile(2); /* set the profile to "Game" */
        initialize();
    }

    void OnApplicationPause()
    {
        suspendSession(); // Dolby sound stops if the app is switched or paused
    }

    void OnApplicationFocus()
    {
        restartSession(); // restart Dolby sound when the app becomes active
    }

    void OnApplicationQuit()
    {
        release(); // stops Dolby sound completely
    }
}
I'm sure you agree that it's very easy to integrate the Dolby Audio API into your game. We first create a debugText object, which is of type GameObject, to receive feedback from the device. We then import the necessary functions defined by the Dolby Audio API and initialize the Dolby Audio API if the user's device supports it.
To ensure that the Dolby service is enabled, we briefly wait (0.1 seconds) before calling the initialize() method. If we don't, there's a chance you'll receive a -1 error, which can happen when you try to enable Dolby while the service is still being established.
Dolby has also included functions to suspend and restart the sound when needed, which is useful when we switch to another application and we don't need the sound enhancement. This is important to conserve battery power and other device resources. We can also stop the sound enhancement completely by invoking release as we do in OnApplicationQuit.
31. Testing
It's time to test the game. Press Command-P to play the game in Unity. If everything works as expected, you are ready for the final steps.
32. Player Settings
When you're happy with your game, it's time to select Build Settings from the File menu and click the Player Settings button. This should bring up the Player Settings in the Inspector panel where you can set the parameters for your application.
These settings are application specific and include the creator or company, application resolution, display mode, etc. These settings depend on the devices you're targeting and the stores or markets you will be publishing your game on.
33. Icons and Splash Images
Using the graphics you created earlier, you can now create a nice icon and a splash image for your game. Unity shows you the required sizes, which depend on the platform you're building for.
34. Build and Play
Once your project is properly configured, it's time to revisit the Build Settings and click the Build button. That's all it takes to build your game for testing and/or distribution.
Conclusion
In this tutorial, we've learned about the new Dolby Audio Plugin for Unity, sprite sheets, controls, collision detection, and other aspects of game development using Unity. I encourage you to experiment with the result and customize the game to make it your own. I hope you liked this tutorial and found it helpful.
RubyMotion is a fantastic framework for building performant iOS applications using the Ruby language. In the first part of this tutorial, you learned how to set up and implement a RubyMotion application. You worked with Interface Builder to create the application's user interface, implemented a view controller, and learned how to write tests for your application.
In this tutorial, you'll learn about the Model-View-Controller or MVC design pattern and how you can use it to structure your application. You'll also implement a painting view and add a gesture recognizer that allows the user to draw on the screen. When you're done, you'll have a complete, fully-working application.
1. Model-View-Controller
Apple encourages iOS developers to apply the Model-View-Controller design pattern to their applications. This pattern divides classes into three categories: models, views, and controllers.
Models contain your application's business logic, the code that determines the rules for managing and interacting with data. The core logic of your application lives in the model layer.
Views display information to the user and allow them to interact with the application.
Controllers are responsible for tying the models and views together. The iOS SDK uses view controllers, specialized controllers that know a little more about the views than controllers in other MVC frameworks do.
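To make the division of responsibilities concrete, here is a minimal, framework-free Ruby sketch of the three roles. The class names (Counter, CounterView, CounterController) are illustrative only and are not part of this tutorial's codebase.

```ruby
# Model: owns the data and the rules for changing it.
class Counter
  attr_reader :value

  def initialize
    @value = 0
  end

  def increment
    @value += 1
  end
end

# View: turns model data into something the user can see.
class CounterView
  def render(value)
    "Count: #{value}"
  end
end

# Controller: receives user actions and ties the model and view together.
class CounterController
  def initialize(model, view)
    @model = model
    @view = view
  end

  def tap
    @model.increment
    @view.render(@model.value)
  end
end

controller = CounterController.new(Counter.new, CounterView.new)
controller.tap # => "Count: 1"
```

Note how the model never references the view; the controller is the only piece that knows about both.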
How does MVC apply to your application? You've already started implementing the PaintingController class, which will connect your models and views together. For the model layer, you'll add two classes:
Stroke This class represents a single stroke in the painting.
Painting This class represents the entire painting and contains one or more strokes.
For the view layer, you'll create a PaintingView class that is responsible for displaying a Painting object to the user. You'll also add a StrokeGestureRecognizer that captures touch input from the user.
2. Strokes
Let's start with the Stroke model. A stroke consists of a color and a number of points that define the stroke. To start, create a file for the Stroke class, app/models/stroke.rb, and another one for its spec, spec/models/stroke_spec.rb.
Next, implement the Stroke class skeleton.
class Stroke
  attr_reader :points, :color
end
The Stroke class has two attributes: points, a collection of points, and color, the color of the Stroke object. Next, implement a constructor.
class Stroke
  attr_reader :points, :color

  def initialize(start_point, color)
    @points = [ start_point ]
    @color = color
  end
end
That looks great so far. The constructor accepts two arguments, start_point and color. It sets points to an array of points containing start_point and color to the provided color.
When a user swipes their finger across the screen, you need a way to add points to the Stroke object. Add the add_point method to Stroke.
def add_point(point)
  points << point
end
That was easy. For convenience, add one more method to the Stroke class that returns the start point.
def start_point
  points.first
end
Of course, no model is complete without a set of specs to go along with it.
describe Stroke do
  before do
    @start_point = CGPoint.new(0.0, 50.0)
    @middle_point = CGPoint.new(50.0, 100.0)
    @end_point = CGPoint.new(100.0, 0.0)
    @color = UIColor.blueColor

    @stroke = Stroke.new(@start_point, @color)
    @stroke.add_point(@middle_point)
    @stroke.add_point(@end_point)
  end

  describe "#initialize" do
    before do
      @stroke = Stroke.new(@start_point, @color)
    end

    it "sets the color" do
      @stroke.color.should == @color
    end
  end

  describe "#start_point" do
    it "returns the stroke's start point" do
      @stroke.start_point.should == @start_point
    end
  end

  describe "#add_point" do
    it "adds the points to the stroke" do
      @stroke.points.should == [ @start_point, @middle_point, @end_point ]
    end
  end
end
This should start to feel familiar. You've added describe blocks that test the initialize, start_point, and add_point methods. There's also a before block that sets up a few instance variables for the specs. Notice that the describe block for #initialize has a before block that resets the @stroke object. That's fine. With specs, you don't have to be as concerned with performance as you do with a regular application.
3. Drawing
It's the moment of truth: time to make your application draw something. Start by creating a file for the PaintingView class at app/views/painting_view.rb. Because we're doing some specialized drawing, the PaintingView class is tricky to test. For the sake of brevity, I'm going to skip the specs for now.
Next, implement the PaintingView class.
class PaintingView < UIView
  attr_accessor :stroke

  def drawRect(rectangle)
    super

    # ensure the stroke is provided
    return if stroke.nil?

    # set up the drawing context
    context = UIGraphicsGetCurrentContext()
    CGContextSetStrokeColorWithColor(context, stroke.color.CGColor)
    CGContextSetLineWidth(context, 20.0)
    CGContextSetLineCap(context, KCGLineCapRound)
    CGContextSetLineJoin(context, KCGLineJoinRound)

    # move the line to the start point
    CGContextMoveToPoint(context, stroke.start_point.x, stroke.start_point.y)

    # add each line in the path
    stroke.points.drop(1).each do |point|
      CGContextAddLineToPoint(context, point.x, point.y)
    end

    # stroke the path
    CGContextStrokePath(context)
  end
end
Phew, that's a lot of code. Let's break it down piece by piece. The PaintingView class extends the UIView class. This allows PaintingView to be added as a subview of PaintingController's view. The PaintingView class has one attribute, stroke, which is an instance of the Stroke model class.
With regards to the MVC pattern, when working with the iOS SDK, it's acceptable for a view to know about a model, but it's not okay for a model to know about a view.
In the PaintingView class, we've overridden UIView's drawRect: method. This method allows you to implement custom drawing code. The first line of this method, super, calls the method on the super class, UIView in this example, with the provided arguments.
In drawRect:, we also check that the stroke attribute isn't nil. This prevents errors if stroke hasn't been set yet. We then fetch the current drawing context by invoking UIGraphicsGetCurrentContext, configure the stroke that we're about to draw, move the drawing context to the start_point of the stroke, and add lines for each point in the stroke object. Finally, we invoke CGContextStrokePath to stroke the path, drawing it in the view.
Add an outlet to PaintingController for the painting view.
outlet :painting_view
Fire up Interface Builder by running bundle exec rake ib:open and add a UIView object to the PaintingController's view from the Object Library on the right. Set the view's class to PaintingView in the Identity Inspector. Make sure that the painting view is positioned underneath the buttons you added earlier. You can adjust the ordering of the subviews by changing the positions of the views in the view hierarchy on the left.
Control and drag from the view controller to the PaintingView and select the painting_view outlet from the menu that appears.
Select the painting view and set its background color to 250 red, 250 green, and 250 blue.
Don't forget to add a spec to spec/controllers/painting_controller_spec.rb for the painting_view outlet.
describe "#painting_view" do
  it "is connected in the storyboard" do
    controller.painting_view.should.not.be.nil
  end
end
To make sure your drawing code works correctly, temporarily create a Stroke object in the PaintingController class, add a few points to it, assign it to the painting view's stroke attribute, and run your application. You can delete this temporary code when you've verified everything is working as expected.
4. Painting
Now that you can draw a stroke, it's time to level up to the entire painting. Let's start with the Painting model. Create a file for the class at app/models/painting.rb and implement the Painting class.
class Painting
  attr_accessor :strokes

  def initialize
    @strokes = []
  end

  def start_stroke(point, color)
    strokes << Stroke.new(point, color)
  end

  def continue_stroke(point)
    current_stroke.add_point(point)
  end

  def current_stroke
    strokes.last
  end
end
The Painting model is similar to the Stroke class. The constructor initializes strokes to an empty array. When a person touches the screen, the application will start a new stroke by calling start_stroke. Then, as the user drags their finger, it will add points with continue_stroke. Don't forget the specs for the Painting class.
describe Painting do
  before do
    @point1 = CGPoint.new(10, 60)
    @point2 = CGPoint.new(20, 50)
    @point3 = CGPoint.new(30, 40)
    @point4 = CGPoint.new(40, 30)
    @point5 = CGPoint.new(50, 20)
    @point6 = CGPoint.new(60, 10)

    @painting = Painting.new
  end

  describe "#initialize" do
    before do
      @painting = Painting.new
    end

    it "sets the strokes to an empty array" do
      @painting.strokes.should == []
    end
  end

  describe "#start_stroke" do
    before do
      @painting.start_stroke(@point1, UIColor.redColor)
      @painting.start_stroke(@point2, UIColor.blueColor)
    end

    it "starts new strokes" do
      @painting.strokes.length.should == 2
      @painting.strokes[0].points.should == [ @point1 ]
      @painting.strokes[0].color.should == UIColor.redColor
      @painting.strokes[1].points.should == [ @point2 ]
      @painting.strokes[1].color.should == UIColor.blueColor
    end
  end

  describe "#continue_stroke" do
    before do
      @painting.start_stroke(@point1, UIColor.redColor)
      @painting.continue_stroke(@point2)
      @painting.start_stroke(@point3, UIColor.blueColor)
      @painting.continue_stroke(@point4)
    end

    it "adds points to the current strokes" do
      @painting.strokes[0].points.should == [ @point1, @point2 ]
      @painting.strokes[1].points.should == [ @point3, @point4 ]
    end
  end
end
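The specs exercise the model through the test framework, but because Painting and Stroke are plain Ruby, the same touch-down/drag behavior can be demonstrated directly. As before, [x, y] arrays stand in for CGPoint and strings for UIColor, and Stroke is included so the snippet is self-contained.

```ruby
class Stroke
  attr_reader :points, :color

  def initialize(start_point, color)
    @points = [ start_point ]
    @color = color
  end

  def add_point(point)
    points << point
  end

  def start_point
    points.first
  end
end

class Painting
  attr_accessor :strokes

  def initialize
    @strokes = []
  end

  def start_stroke(point, color)
    strokes << Stroke.new(point, color)
  end

  def continue_stroke(point)
    current_stroke.add_point(point)
  end

  def current_stroke
    strokes.last
  end
end

painting = Painting.new
painting.start_stroke([10, 60], "red")   # finger down: a new stroke begins
painting.continue_stroke([20, 50])       # finger drag: the stroke grows
painting.start_stroke([30, 40], "blue")  # finger down again: a second stroke
painting.continue_stroke([40, 30])

painting.strokes.length        # => 2
painting.current_stroke.points # => [[30, 40], [40, 30]]
```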
Next, modify the PaintingView class to draw a Painting object instead of a Stroke object.
class PaintingView < UIView
  attr_accessor :painting

  def drawRect(rectangle)
    super

    # ensure the painting is provided
    return if painting.nil?

    painting.strokes.each do |stroke|
      draw_stroke(stroke)
    end
  end

  def draw_stroke(stroke)
    # set up the drawing context
    context = UIGraphicsGetCurrentContext()
    CGContextSetStrokeColorWithColor(context, stroke.color.CGColor)
    CGContextSetLineWidth(context, 20.0)
    CGContextSetLineCap(context, KCGLineCapRound)
    CGContextSetLineJoin(context, KCGLineJoinRound)

    # move the line to the start point
    CGContextMoveToPoint(context, stroke.start_point.x, stroke.start_point.y)

    # add each line in the path
    stroke.points.drop(1).each do |point|
      CGContextAddLineToPoint(context, point.x, point.y)
    end

    # stroke the path
    CGContextStrokePath(context)
  end
end
You've changed the stroke attribute to painting. The drawRect: method now iterates over all of the strokes in the painting and draws each one using draw_stroke, which contains the drawing code you wrote previously.
You also need to update the view controller to contain a Painting model. At the top of the PaintingController class, add attr_reader :painting. As the name implies, the viewDidLoad method of the UIViewController class—the superclass of the PaintingController class—is called when the view controller has finished loading its view. The viewDidLoad method is therefore a good place to create a Painting instance and set the painting attribute of the PaintingView object.
def viewDidLoad
  @painting = Painting.new
  painting_view.painting = painting
end
As always, don't forget to add tests for viewDidLoad to spec/controllers/painting_controller_spec.rb.
describe "#viewDidLoad" do
  it "sets the painting" do
    controller.painting.should.be.instance_of Painting
  end

  it "sets the painting attribute of the painting view" do
    controller.painting_view.painting.should == controller.painting
  end
end
5. Gesture Recognizers
Your application will be pretty boring unless you allow people to draw on the screen with their fingers. Let's add that piece of functionality now. Create a file for the StrokeGestureRecognizer class along with a file for its spec, and start with the following class skeleton.
class StrokeGestureRecognizer < UIGestureRecognizer
  attr_reader :position
end
The StrokeGestureRecognizer class extends the UIGestureRecognizer class, which handles touch input. It has a position attribute that the PaintingController class will use to determine the position of the user's finger.
There are four methods you need to implement in the StrokeGestureRecognizer class, touchesBegan:withEvent:, touchesMoved:withEvent:, touchesEnded:withEvent:, and touchesCancelled:withEvent:. The touchesBegan:withEvent: method is called when the user starts touching the screen with their finger. The touchesMoved:withEvent: method is called repeatedly when the user moves their finger and the touchesEnded:withEvent: method is invoked when the user lifts their finger from the screen. Finally, the touchesCancelled:withEvent: method is invoked if the gesture is cancelled by the user.
Your gesture recognizer needs to do two things for each event: update the position attribute and change the state property.
class StrokeGestureRecognizer < UIGestureRecognizer
attr_reader :position
def touchesBegan(touches, withEvent: event)
super
@position = touches.anyObject.locationInView(self.view)
self.state = UIGestureRecognizerStateBegan
end
def touchesMoved(touches, withEvent: event)
super
@position = touches.anyObject.locationInView(self.view)
self.state = UIGestureRecognizerStateChanged
end
def touchesEnded(touches, withEvent: event)
super
@position = touches.anyObject.locationInView(self.view)
self.state = UIGestureRecognizerStateEnded
end
def touchesCancelled(touches, withEvent: event)
super
@position = touches.anyObject.locationInView(self.view)
self.state = UIGestureRecognizerStateEnded
end
end
Both the touchesEnded:withEvent: and touchesCancelled:withEvent: methods set the state to UIGestureRecognizerStateEnded, because even if the gesture is interrupted, the strokes drawn so far should remain untouched.
In order to test the StrokeGestureRecognizer class, you need to be able to create an instance of UITouch. Unfortunately, there's no publicly available API to accomplish this. To make it work, we'll make use of the Facon mocking library.
Add gem 'motion-facon' to your Gemfile and run bundle install. Then, add require "motion-facon" below require "sugarcube-color" in the project's Rakefile.
Next, implement the StrokeGestureRecognizer spec.
describe StrokeGestureRecognizer do
extend Facon::SpecHelpers
before do
@stroke_gesture_recognizer = StrokeGestureRecognizer.new
@touch1 = mock(UITouch, :"locationInView:" => CGPoint.new(100, 200))
@touch2 = mock(UITouch, :"locationInView:" => CGPoint.new(300, 400))
@touches1 = NSSet.setWithArray [ @touch1 ]
@touches2 = NSSet.setWithArray [ @touch2 ]
end
describe "#touchesBegan:withEvent:" do
before do
@stroke_gesture_recognizer.touchesBegan(@touches1, withEvent: nil)
end
it "sets the position to the gesture's position" do
@stroke_gesture_recognizer.position.should == CGPoint.new(100, 200)
end
it "sets the state of the gesture recognizer" do
@stroke_gesture_recognizer.state.should == UIGestureRecognizerStateBegan
end
end
describe "#touchesMoved:withEvent:" do
before do
@stroke_gesture_recognizer.touchesBegan(@touches1, withEvent: nil)
@stroke_gesture_recognizer.touchesMoved(@touches2, withEvent: nil)
end
it "sets the position to the gesture's position" do
@stroke_gesture_recognizer.position.should == CGPoint.new(300, 400)
end
it "sets the state of the gesture recognizer" do
@stroke_gesture_recognizer.state.should == UIGestureRecognizerStateChanged
end
end
describe "#touchesEnded:withEvent:" do
before do
@stroke_gesture_recognizer.touchesBegan(@touches1, withEvent: nil)
@stroke_gesture_recognizer.touchesEnded(@touches2, withEvent: nil)
end
it "sets the position to the gesture's position" do
@stroke_gesture_recognizer.position.should == CGPoint.new(300, 400)
end
it "sets the state of the gesture recognizer" do
@stroke_gesture_recognizer.state.should == UIGestureRecognizerStateEnded
end
end
describe "#touchesCancelled:withEvent:" do
before do
@stroke_gesture_recognizer.touchesBegan(@touches1, withEvent: nil)
@stroke_gesture_recognizer.touchesCancelled(@touches2, withEvent: nil)
end
it "sets the position to the gesture's position" do
@stroke_gesture_recognizer.position.should == CGPoint.new(300, 400)
end
it "sets the state of the gesture recognizer" do
@stroke_gesture_recognizer.state.should == UIGestureRecognizerStateEnded
end
end
end
extend Facon::SpecHelpers makes several methods available in your specs, including mock. mock is a simple way to create test objects that behave exactly as you specify. In the before block at the beginning of the specs, you're mocking instances of UITouch whose locationInView: method returns a predefined point.
Next, add a stroke_gesture_changed method to the PaintingController class. This method will receive an instance of the StrokeGestureRecognizer class whenever the gesture is updated.
def stroke_gesture_changed(stroke_gesture_recognizer)
if stroke_gesture_recognizer.state == UIGestureRecognizerStateBegan
painting.start_stroke(stroke_gesture_recognizer.position, selected_color)
else
painting.continue_stroke(stroke_gesture_recognizer.position)
end
painting_view.setNeedsDisplay
end
When the gesture recognizer's state is UIGestureRecognizerStateBegan, this method starts a new stroke in the Painting object using the StrokeGestureRecognizer's position and selected_color. Otherwise, it continues the current stroke.
Add the specs for this method.
describe "#stroke_gesture_changed" do
before do
drag(controller.painting_view, :points => [ CGPoint.new(100, 100), CGPoint.new(150, 150), CGPoint.new(200, 200) ])
end
it "adds the points to the stroke" do
controller.painting.strokes.first.points[0].should == CGPoint.new(100, 100)
controller.painting.strokes.first.points[1].should == CGPoint.new(150, 150)
controller.painting.strokes.first.points[2].should == CGPoint.new(200, 200)
end
it "sets the stroke's color to the selected color" do
controller.painting.strokes.first.color.should == controller.selected_color
end
end
RubyMotion provides several helper methods to simulate user interaction, including drag, which simulates a finger dragging across the screen. The points option allows you to provide an array of points for the drag.
If you were to run the specs now, they would fail. That's because you need to add the gesture recognizer to the storyboard. Launch Interface Builder by running bundle exec rake ib:open. From the Object Library, drag an Object into your scene, and change its class to StrokeGestureRecognizer in the Identity Inspector on the right.
Control and drag from the StrokeGestureRecognizer object to the PaintingController and choose the stroke_gesture_changed method from the menu that appears. This ensures the stroke_gesture_changed method is called whenever the gesture recognizer is triggered. Then, control and drag from the PaintingView object to the StrokeGestureRecognizer object and select gestureRecognizer from the menu that appears.
Add a spec for the gesture recognizer to the PaintingController specs in the #painting_view describe block.
describe "#painting_view" do
it "is connected in the storyboard" do
controller.painting_view.should.not.be.nil
end
it "has a stroke gesture recognizer" do
controller.painting_view.gestureRecognizers.length.should == 1
controller.painting_view.gestureRecognizers[0].should.be.instance_of StrokeGestureRecognizer
end
end
That's it. With these changes your application should now allow a person to draw on the screen. Run your application and have fun.
6. Final Touches
There are a few final touches left to add before your application is finished. Because your application is immersive, the status bar is a bit distracting. You can remove it by setting the UIStatusBarHidden and UIViewControllerBasedStatusBarAppearance values in the application's Info.plist. This is easy to do in the RubyMotion setup block inside the project's Rakefile.
Motion::Project::App.setup do |app|
app.name = 'Paint'
app.info_plist['UIStatusBarHidden'] = true
app.info_plist['UIViewControllerBasedStatusBarAppearance'] = false
end
The application's icons and launch images are included in the source files of this tutorial. Download the images and copy them to the resources directory of the project. Then, set the application icon in the Rakefile configuration. You may have to clean the build by running bundle exec rake clean:all in order to see the new launch image.
Motion::Project::App.setup do |app|
app.name = 'Paint'
app.info_plist['UIStatusBarHidden'] = true
app.info_plist['UIViewControllerBasedStatusBarAppearance'] = false
app.icons = [ "icon.png" ]
end
Conclusion
That's it. You now have a complete app that's ready for a million downloads in the App Store. You can view and download the source for this application from GitHub.
Even though your app is finished, there's so much more you could add to it. You can add curves between the lines, more colors, different line widths, saving, undo, and redo, and anything else you can imagine. What will you do to make your app better? Let me know in the comments below.
In this tutorial, I'm going to show you UIKit Dynamics, a brand new addition to the iOS SDK introduced in iOS 7, and demonstrate how it can be used to create attractive, eye-catching animation effects.
The purpose of the interface is to allow developers to add realism to their applications in an easy and straightforward fashion. In this tutorial, we’ll see a number of examples that illustrate this.
In the first part of this tutorial, I will demonstrate how to create an animated menu and in the second part we will focus on creating a customized animated alert view. We will create the menu and the alert view as standalone components to maximize reusability.
1. UIKit Dynamics Essentials
Before we start writing code, it's necessary to take a look at the essentials of UIKit Dynamics. UIKit Dynamics is part of the UIKit framework, which means that you don't need to add any additional frameworks to your projects to use it.
It provides developers with an interface for adding realistic effects to the view layer of their applications. It’s important to mention that UIKit Dynamics makes use of a physics engine to do its work. This allows developers to focus on the functionality they'd like to add to their application instead of the implementation. A basic understanding of math and physics is all you need to get started with UIKit Dynamics.
The main component of the UIKit Dynamics interface is the UIDynamicAnimator class, which is also known as the dynamic animator. This class is responsible for performing the animations using a physics engine under the hood. Even though the dynamic animator is the heart of the UIKit Dynamics interface, it can’t be used on its own.
In order to work, specific behaviors need to be added to the dynamic animator. Each behavior encapsulates a set of physics forces, and the resulting animation is defined by the combination of behaviors you add. Programmatically speaking, these dynamic behaviors are classes of the UIKit Dynamics interface and each behavior has specific attributes that can be modified to influence the animation.
The base class of these behaviors is the UIDynamicBehavior class. Although you're free to create your own custom behaviors, UIKit Dynamics comes with a number of easy-to-use subclasses that mimic common behaviors, such as gravity and collisions.
UIGravityBehavior: This UIDynamicBehavior subclass adds gravity to an item. As a result, the item moves to a certain direction defined by the gravity behavior.
UICollisionBehavior: This class defines how two items collide with one another or how an item collides with a predefined boundary, visible or invisible.
UIPushBehavior: As its name indicates, this behavior gives an item a push; in other words, it accelerates the item. The push can be continuous or instantaneous. A continuous push behavior gradually applies the force of the push while an instantaneous push behavior applies the force the moment the behavior is added.
UISnapBehavior: A snap behavior defines how an item snaps to another item or a point in space. The snap behavior is customizable in several ways. For example, the item can either snap to a point without any bounciness or jiggle for a few moments before coming to a halt.
UIAttachmentBehavior: An attachment behavior defines how two dynamic items are connected to one another or how a dynamic item is connected to an anchor point.
These are the most important dynamic behaviors currently provided by the UIKit Dynamics interface. In order for these behaviors to do their work, they need to be initialized, configured, and added to a dynamic animator object, which we covered earlier.
Combining behaviors is possible as long as they don’t cause any conflicts. Also note that some behaviors can only be added once to the dynamic animator object. For example, adding two instances of the UIGravityBehavior class to the dynamic animator will result in an exception.
The behaviors are always applied to dynamic items. A dynamic item is any object that conforms to the UIDynamicItem protocol. The great thing is that the UIView and UICollectionViewLayoutAttributes classes already conform to this protocol. This means that every view in your application can leverage UIKit Dynamics.
There's one more UIDynamicBehavior subclass that's worth mentioning, UIDynamicItemBehavior. Instead of defining a specific behavior, it offers a base dynamic animation configuration that can be applied to dynamic items. It has a number of properties to define the behavior:
elasticity: This property defines the elasticity of a collision between dynamic items or a dynamic item and a boundary. The value ranges from 0.0 to 1.0 with the latter being a very elastic collision.
density: This property defines the mass of a dynamic item with a high value resulting in a heavy object. This can be useful if you don’t want a dynamic item to be moved when another item collides with it.
resistance: As its name implies, this property defines the velocity damping of a dynamic item.
angularResistance: This one is similar to the resistance property, but the angularResistance property defines angular velocity damping.
friction: This property defines the friction or resistance of two dynamic items that slide against each other.
allowsRotation: This one simply specifies if a dynamic item is allowed to rotate or not.
Combining dynamic behaviors can give spectacular results. In this tutorial, we'll use most of the behaviors listed above.
I encourage you to visit Apple’s official documentation and read more about the key classes of UIKit Dynamics. Also, after finishing this tutorial, it’s useful to play around with the result to get a better understanding of the concepts and the various behaviors defined by UIKit Dynamics.
2. Application Overview
I've already mentioned in the introduction that we're going to create two reusable components that will leverage UIKit Dynamics. In this tutorial, we'll create a custom animated menu. Take a look at the final result below.
This component is a UIView object that slides in and out of the screen. To make the menu appear and disappear, a simple swipe gesture is used. The items of the menu are listed in a plain table view. The animation you see in the above screenshot is achieved by combining a number of dynamic behaviors.
3. Creating the Project
Start by launching Xcode and create a new project. Select the Single View Application template in the Application category of the iOS section. Click Next to continue.
Name the project DynamicsDemo and make sure that Devices is set to iPhone. For this tutorial, I've left the Class Prefix field empty, but feel free to enter your own class prefix. Click Next, tell Xcode where you'd like to store the project, and hit Create.
Download the source files of this tutorial and add the images from the Xcode project to the Images.xcassets folder of your project in the Project Navigator.
Before we begin implementing the menu component, it’s important to take a closer look at how it should operate. As I've already mentioned, the menu should show itself by sliding into view from the right to the left and hide itself by sliding out of view from left to right.
Showing and hiding the menu is triggered by a swipe gesture. UIKit Dynamics will be responsible for the behavior or animation of the menu. The behaviors we'll use are:
Gravity behavior: The gravity behavior will pull the menu in the appropriate direction, left or right. Without the gravity behavior, the menu won't do much.
Collision behavior: The collision behavior is equally important. Without it, the menu wouldn't stop moving once gravity was applied to it. An invisible boundary will trigger a collision and make the menu stop where we want it to stop.
Push behavior: Even though the gravity and collision behaviors can animate the menu in and out, we will give it an extra push or acceleration using a push behavior. This will make the animation snappier.
Dynamic item behavior: We will also add a dynamic item behavior for defining the elasticity of the menu. This will result in a bouncy collision.
Instead of instantiating the above behaviors every time they need to take effect, we will initialize them once. We apply and configure the behaviors when we need them, which will depend on the position of the menu and the direction it needs to move in.
5. Properties, Structures, and Initialization
Step 1: Creating the Class
Let’s begin by adding a new class to the project. Press Command-N, select Objective-C class from the list of templates in the iOS > Cocoa Touch section. Click Next to continue.
Name the new class MenuComponent and make sure the class inherits from NSObject. Click Next, tell Xcode where you want to save the class files, and hit Create.
Step 2: Declaring Properties
Open MenuComponent.h and define an enumeration that will represent the menu direction. The direction can be left-to-right or right-to-left. Add the following code snippet below the import statement.
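The snippet itself didn't survive in this excerpt. Based on how the type is used later in the tutorial (the MenuDirectionOptions type in the initializer and the menuDirectionLeftToRight value in toggleMenu), it presumably looks like the following; note that the name of the second value is an assumption.

```objc
// Hypothetical reconstruction: only menuDirectionLeftToRight and the
// MenuDirectionOptions type name appear verbatim elsewhere in the tutorial.
typedef enum {
    menuDirectionLeftToRight,
    menuDirectionRightToLeft
} MenuDirectionOptions;
```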
menuBackgroundColor: This property is used to set the menu’s background color.
tableSettings: This is the dictionary I mentioned in the previous section. It will let us configure the table view by setting a number of options.
optionCellHeight: This is the only table view attribute that cannot be set using the tableSettings dictionary. It specifies the row height of the table view's cells.
acceleration: This property specifies the magnitude of the push behavior, in other words, the amount of force applied to the menu view when it's swiped into and out of view.
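The property declarations themselves are missing from this excerpt. A sketch matching the descriptions above might look like this; the exact memory-management attributes are assumptions:

```objc
// The menu's background color.
@property (nonatomic, strong) UIColor *menuBackgroundColor;
// Options used to configure the table view.
@property (nonatomic, strong) NSDictionary *tableSettings;
// The row height of the table view's cells.
@property (nonatomic) CGFloat optionCellHeight;
// The magnitude of the push behavior.
@property (nonatomic) CGFloat acceleration;
```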
Step 3: Implementing the Initialization Method
In this step, we declare a custom initializer in which we set:
the final frame of the menu view, that is, the frame when the animation is completed and the menu is visible
the target view to which the menu view will be added as a subview
the array of options and images displayed by the table view
the direction of the animation when the menu is shown
The interface of the custom initializer is shown below. Add it to the public interface of the MenuComponent class.
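Based on the definition that appears later in the tutorial, the declaration is:

```objc
- (id)initMenuWithFrame:(CGRect)frame
             targetView:(UIView *)targetView
              direction:(MenuDirectionOptions)direction
                options:(NSArray *)options
           optionImages:(NSArray *)optionImages;
```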
Let's take a look at the implementation of the custom initializer. Add the following code snippet to the implementation file of the MenuComponent class. The implementation is pretty straightforward as you can see below.
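The body isn't included in this excerpt. Judging from the property names used later (menuFrame, targetView, menuDirection, menuOptions, and menuOptionImages), it presumably stores the arguments in properties, along the lines of:

```objc
- (id)initMenuWithFrame:(CGRect)frame targetView:(UIView *)targetView direction:(MenuDirectionOptions)direction options:(NSArray *)options optionImages:(NSArray *)optionImages {
    if (self = [super init]) {
        // Keep the values passed to the initializer around for later use.
        self.menuFrame = frame;
        self.targetView = targetView;
        self.menuDirection = direction;
        self.menuOptions = options;
        self.menuOptionImages = optionImages;
    }
    return self;
}
```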
There are a number of properties that need to be initialized and I think it's therefore a good idea to group them together in a few private methods. Navigate to the class extension of the MenuComponent class and declare the following method:
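The declarations aren't shown in this excerpt; from the calls made in the initializer later on, the private methods are presumably declared as:

```objc
- (void)setupMenuView;
- (void)setupBackgroundView;
- (void)setupOptionsTableView;
- (void)setInitialTableViewSettings;
- (void)setupSwipeGestureRecognizer;
```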
In setupMenuView, we set up the menu view. Because the menu initially needs to be out of the screen’s visible area, we start by calculating the menu's initial frame. We then initialize the menu view with its initial frame, set its background color, and add it as a subview to the target or parent view.
In setupBackgroundView, we set up the background view. Note that the alpha value of the background view is initially set to 0.0. We update this value the moment the menu appears.
In setupOptionsTableView, we start by initializing the table by invoking initWithFrame: in which the menu view’s size is used for the table view's size. The MenuComponent instance is set as the table view's data source and delegate.
When we set the MenuComponent instance as the table view's data source and delegate, the compiler warns us that the MenuComponent class doesn't conform to the UITableViewDataSource and UITableViewDelegate protocols. Let's fix this by updating the header file of the MenuComponent class as shown below.
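The updated interface declaration should adopt both protocols:

```objc
// MenuComponent.h
@interface MenuComponent : NSObject <UITableViewDataSource, UITableViewDelegate>
```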
Two things are worth pointing out. First, the direction of the swipe gesture depends on the value of the menuDirection property. Second, the hideMenuWithGesture: method is a private method that is invoked every time the gesture recognizer detects a swipe gesture. We'll implement this method later, but, to get rid of the compiler warning, declare the method in the private class extension of the MenuComponent class as shown below.
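The declaration matches the implementation shown later in the tutorial:

```objc
- (void)hideMenuWithGesture:(UISwipeGestureRecognizer *)gesture;
```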
Let's take advantage of the work we just did by invoking the helper methods in the initialization method as shown below.
- (id)initMenuWithFrame:(CGRect)frame targetView:(UIView *)targetView direction:(MenuDirectionOptions)direction options:(NSArray *)options optionImages:(NSArray *)optionImages {
if (self = [super init]) {
...
// Setup the background view.
[self setupBackgroundView];
// Setup the menu view.
[self setupMenuView];
// Setup the options table view.
[self setupOptionsTableView];
// Set the initial table view settings.
[self setInitialTableViewSettings];
// Setup the swipe gesture recognizer.
[self setupSwipeGestureRecognizer];
}
return self;
}
Note that the background view is set up before the menu view to make sure the background view is positioned below the menu view.
Finally, initialize the remaining properties after invoking the helper methods as shown below.
- (id)initMenuWithFrame:(CGRect)frame targetView:(UIView *)targetView direction:(MenuDirectionOptions)direction options:(NSArray *)options optionImages:(NSArray *)optionImages {
if (self = [super init]) {
...
// Initialize the animator.
self.animator = [[UIDynamicAnimator alloc] initWithReferenceView:self.targetView];
// Set the initial height for each cell row.
self.optionCellHeight = 50.0;
// Set the initial acceleration value (push magnitude).
self.acceleration = 15.0;
// Indicate that initially the menu is not shown.
self.isMenuShown = NO;
}
return self;
}
7. Dynamic Behaviors
Every time the menu view appears or disappears, we use the same dynamic behaviors. We will therefore reuse those dynamic behaviors. The only thing that differs is the direction of the menu during the animation.
We're going to create one method in which we'll initialize and apply the necessary dynamic behaviors and we’ll call this method every time the menu state needs to change. Start by updating the class's private class extension with the toggleMenu method.
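The declaration in the class extension is a one-liner:

```objc
- (void)toggleMenu;
```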
I already mentioned that the dynamic behaviors that will be used to animate the menu are gravity, collision, and push. Each of these behaviors has one or more properties that, when modified, determine the direction of the animation.
The gravity behavior has a direction property, a CGVector structure that specifies the direction of the gravity. For example, by setting the direction property to { 1.0, 0.0 } the gravity behavior pulls to the right whereas a value of { 0.0, 1.0 } results in a force that pulls towards the bottom.
The collision behavior works either between two dynamic items or between a dynamic item and a boundary. In our example, we need an invisible boundary defined by two points that stops the menu view.
The push behavior has a magnitude property defining the acceleration applied to a dynamic item. The value also determines the direction of the push.
In the toggleMenu method, we first calculate the values of the aforementioned properties. We then create and configure the dynamic behaviors, and add them to the dynamic animator object.
It’s important to emphasize that the values of isMenuShown, indicating whether the menu is currently shown or not, and menuDirection, specifying the direction of the animation, determine the values of the aforementioned properties.
The animator object keeps an array of every dynamic behavior that has been added to it. Every time the toggleMenu method is called, the existing dynamic behaviors need to be removed, because some dynamic behaviors, such as the gravity behavior, cannot be added to the dynamic animator twice.
Let's start implementing the toggleMenu method.
- (void)toggleMenu{
// Remove any previous behaviors added to the animator.
[self.animator removeAllBehaviors];
// The following variables will define the direction of the menu view animation.
// This variable indicates the gravity direction.
CGFloat gravityDirectionX;
// These two points define an invisible boundary with which the menu view should collide.
// The boundary must always be on the side that gravity pulls towards, so that the
// menu view stops moving when it reaches it.
CGPoint collisionPointFrom, collisionPointTo;
// The higher the push magnitude value, the greater the acceleration of the menu view.
// If that value is set to 0.0, then only the gravity force will be applied to the
// menu view.
CGFloat pushMagnitude = self.acceleration;
}
The gravityDirectionX, collisionPointFrom, collisionPointTo, and pushMagnitude variables will hold the values that we'll assign to the dynamic behaviors later. Let’s set their values, depending on the menu's state and the animation direction.
- (void)toggleMenu{
...
// Check if the menu is shown or not.
if (!self.isMenuShown) {
// If the menu view is hidden and it's about to be shown, then specify each variable
// value depending on the animation direction.
if (self.menuDirection == menuDirectionLeftToRight) {
// The value 1.0 means that gravity "moves" the view towards the right side.
gravityDirectionX = 1.0;
// The From and To points define an invisible boundary, where the X-origin point
// equals to the desired X-origin point that the menu view should collide, and the
// Y-origin points specify the highest and lowest point of the boundary.
// If the menu view is being shown from left to right, then the collision boundary
// should be defined so as to be at the right of the initial menu view position.
collisionPointFrom = CGPointMake(self.menuFrame.size.width, self.menuFrame.origin.y);
collisionPointTo = CGPointMake(self.menuFrame.size.width, self.menuFrame.size.height);
}
else{
// The value -1.0 means that gravity "pulls" the view towards the left side.
gravityDirectionX = -1.0;
// If the menu view is being shown from right to left, then the collision boundary
// should be defined so as to be at the left of the initial menu view position.
collisionPointFrom = CGPointMake(self.targetView.frame.size.width - self.menuFrame.size.width, self.menuFrame.origin.y);
collisionPointTo = CGPointMake(self.targetView.frame.size.width - self.menuFrame.size.width, self.menuFrame.size.height);
// Reverse the sign of the push magnitude so the push points in the same direction as gravity.
pushMagnitude = (-1) * pushMagnitude;
}
// Make the background view semi-transparent.
[self.backgroundView setAlpha:0.25];
}
else{
// If the menu is about to be hidden, set the opposite values to the variables
// so that the animation is reversed.
if (self.menuDirection == menuDirectionLeftToRight) {
gravityDirectionX = -1.0;
collisionPointFrom = CGPointMake(-self.menuFrame.size.width, self.menuFrame.origin.y);
collisionPointTo = CGPointMake(-self.menuFrame.size.width, self.menuFrame.size.height);
// Reverse the sign of the push magnitude so the push points in the same direction as gravity.
pushMagnitude = (-1) * pushMagnitude;
}
else{
gravityDirectionX = 1.0;
collisionPointFrom = CGPointMake(self.targetView.frame.size.width + self.menuFrame.size.width, self.menuFrame.origin.y);
collisionPointTo = CGPointMake(self.targetView.frame.size.width + self.menuFrame.size.width, self.menuFrame.size.height);
}
// Make the background view fully transparent.
[self.backgroundView setAlpha:0.0];
}
}
The above code snippet is pretty simple to understand, despite its size. The comments in the code will help you understand what's going on. Note that the background view’s alpha value is determined by whether the menu is about to be shown or hidden.
It's time to add the dynamic behaviors. Let’s start with the gravity behavior.
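The snippets themselves aren't included in this excerpt; a sketch using the values calculated above might look like this (the menuView property name is an assumption based on the setupMenuView method):

```objc
// Pull the menu view horizontally in the direction calculated above.
UIGravityBehavior *gravityBehavior = [[UIGravityBehavior alloc] initWithItems:@[self.menuView]];
gravityBehavior.gravityDirection = CGVectorMake(gravityDirectionX, 0.0);
[self.animator addBehavior:gravityBehavior];

// Stop the menu view at the invisible boundary defined by the two collision points.
UICollisionBehavior *collisionBehavior = [[UICollisionBehavior alloc] initWithItems:@[self.menuView]];
[collisionBehavior addBoundaryWithIdentifier:@"menuBoundary"
                                   fromPoint:collisionPointFrom
                                     toPoint:collisionPointTo];
[self.animator addBehavior:collisionBehavior];
```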
You may have noticed that the initialization methods of the gravity and collision behaviors accept an array that contains the dynamic items to which the behavior will be applied. In our example, there is only the menu view.
Before we add the push behavior to the dynamic animator, let’s first create a dynamic item behavior, which is, simply put, a general purpose behavior that we'll use for setting the elasticity of the collision. The greater the elasticity value, the greater the bounciness of the collision between the view and the invisible boundary. The accepted values range from 0.0 to 1.0.
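The snippet is missing from this excerpt; a minimal sketch follows, in which the elasticity value of 0.35 is an assumption:

```objc
// A general-purpose item behavior, used here only to make the collision bouncy.
UIDynamicItemBehavior *itemBehavior = [[UIDynamicItemBehavior alloc] initWithItems:@[self.menuView]];
itemBehavior.elasticity = 0.35;
[self.animator addBehavior:itemBehavior];
```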
We could have set more behavior attributes using the itemBehavior object, such as the resistance or the friction. You can play around with these properties later if you want.
Let's now add the push behavior that will accelerate the menu.
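A sketch consistent with the variables calculated earlier in toggleMenu:

```objc
// Give the menu view an instantaneous push; the sign of pushMagnitude
// determines whether the push points left or right.
UIPushBehavior *pushBehavior = [[UIPushBehavior alloc] initWithItems:@[self.menuView]
                                                                mode:UIPushBehaviorModeInstantaneous];
pushBehavior.magnitude = pushMagnitude;
[self.animator addBehavior:pushBehavior];
```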
By setting mode to UIPushBehaviorModeInstantaneous, the force of the push behavior is applied all at once instead of gradually. The dynamic behaviors have now been added to the dynamic animator, which means it's time to make the menu view appear and disappear.
8. Showing and Hiding the Menu
Step 1: showMenu
To show and hide the menu, we need to invoke the private toggleMenu method. The menu should appear when the target view detects a swipe gesture in the direction of the menu's animation. Let's start by declaring a public method that we can invoke to show the menu. Open MenuComponent.h and declare the showMenu method as shown below.
The implementation is very simple, because all we do is invoke the toggleMenu method. Note that we update isMenuShown to reflect the new state of the menu.
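Neither the declaration nor the implementation survived in this excerpt; mirroring the hideMenuWithGesture: method shown later, they presumably look like this:

```objc
// In the public interface:
- (void)showMenu;

// In the implementation:
- (void)showMenu {
    // Animate the menu into view.
    [self toggleMenu];
    // Indicate that the menu is now shown.
    self.isMenuShown = YES;
}
```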
Step 2: Showing the Menu
Before we initialize the menu, let's create the gesture recognizer that will detect the swipe gesture that shows the menu. We do this in the view controller's viewDidLoad method.
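The snippet isn't included in this excerpt; a sketch of what it presumably looks like, assuming showMenu: is the view controller's action method described next:

```objc
// Detect a right-to-left swipe on the view controller's view to reveal the menu.
UISwipeGestureRecognizer *showMenuGesture = [[UISwipeGestureRecognizer alloc] initWithTarget:self
                                                                                      action:@selector(showMenu:)];
showMenuGesture.direction = UISwipeGestureRecognizerDirectionLeft;
[self.view addGestureRecognizer:showMenuGesture];
```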
The showMenu: method triggered by the gesture recognizer is a private helper method that we'll implement shortly. Note that the gesture direction is set to UISwipeGestureRecognizerDirectionLeft, which means that we want the menu to animate in from the right to the left.
With the gesture recognizer initialized, it's time to create the menu using the initializer we implemented earlier in the MenuComponent class. Add the following code snippet to the view controller's viewDidLoad method.
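The snippet isn't included here; a hypothetical example follows, in which the frame, direction value name, option titles, and image names are all placeholders:

```objc
// The frame, options, and image names below are examples only.
self.menuComponent = [[MenuComponent alloc] initMenuWithFrame:CGRectMake(0.0, 0.0, 280.0, self.view.frame.size.height)
                                                   targetView:self.view
                                                    direction:menuDirectionRightToLeft
                                                      options:@[@"Home", @"Settings", @"About"]
                                                 optionImages:@[@"home", @"settings", @"about"]];
```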
Before running the app for the first time, revisit MenuComponent.m and comment out the invocation of the setupOptionsTableView method in the initialization method. This is necessary if we want to prevent our application from crashing, because we haven't implemented the table view protocols yet.
// Setup the options table view.
// [self setupOptionsTableView];
Run the application and swipe from the right to the left. At this point, the menu should appear, but we can't make it disappear and the menu doesn't display any options yet.
Step 3: Hiding the Menu
Even though the swipe gesture for showing the menu is performed on the view to which the menu is added, the gesture for hiding the menu needs to be detected by the menu itself. Do you remember that we created a gesture recognizer in the MenuComponent class and declared hideMenuWithGesture:? The latter method will be invoked when the menu needs to be hidden.
Let’s implement hideMenuWithGesture: in MenuComponent.m. Its implementation is pretty simple as you can see below.
- (void)hideMenuWithGesture:(UISwipeGestureRecognizer *)gesture {
// Make a call to toggleMenu method for hiding the menu.
[self toggleMenu];
// Indicate that the menu is not shown.
self.isMenuShown = NO;
}
If you run the application now, you should be able to show and hide the menu with a swipe gesture.
9. Menu Options
The menu view is now capable of appearing and disappearing. It's time to focus our attention on setting up the options table view and displaying the menu options. We've already declared a few options during the menu's initialization in the view controller's viewDidLoad method. Let's see what we need to do to display them.
The first thing we need to do is implement the required methods of the UITableViewDataSource and UITableViewDelegate protocols. Note that the cell height is specified by the optionCellHeight property.
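The data source and delegate methods themselves are missing from this excerpt; minimal versions consistent with the surrounding text would be:

```objc
- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    // One row per menu option.
    return self.menuOptions.count;
}

- (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath {
    // The row height is configurable through the optionCellHeight property.
    return self.optionCellHeight;
}
```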
In tableView:cellForRowAtIndexPath:, we configure the cells to display the menu options and images. Note that we use the tableSettings object to configure the table view cells.
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"optionCell"];
    if (cell == nil) {
        cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:@"optionCell"];
    }
    // Set the selection style.
    [cell setSelectionStyle:[[self.tableSettings objectForKey:@"selectionStyle"] intValue]];
    // Set the cell's text and specify various properties of it.
    cell.textLabel.text = [self.menuOptions objectAtIndex:indexPath.row];
    [cell.textLabel setFont:[self.tableSettings objectForKey:@"font"]];
    [cell.textLabel setTextAlignment:[[self.tableSettings objectForKey:@"textAlignment"] intValue]];
    [cell.textLabel setTextColor:[self.tableSettings objectForKey:@"textColor"]];
    // If the menu option images array is not nil, then set the cell image.
    if (self.menuOptionImages != nil) {
        [cell.imageView setImage:[UIImage imageNamed:[self.menuOptionImages objectAtIndex:indexPath.row]]];
        [cell.imageView setTintColor:[UIColor whiteColor]];
    }
    [cell setBackgroundColor:[UIColor clearColor]];
    return cell;
}
Before running the application, make sure to uncomment the call to setupOptionsTableView.
// Setup the options table view.
[self setupOptionsTableView];
That’s it. Run the application and swipe from right to left to show the menu and its options.
10. Handling User Selection
As I mentioned earlier, we won't be implementing a delegate protocol for the options menu. Instead, we'll make use of blocks. We won't be adding any new methods though. Instead, we're going to slightly modify the showMenu method. Start by updating its declaration in MenuComponent.h as shown below.
The showMenu: method now accepts one argument, a block. The block also accepts one argument, the user's selection.
This also means that we need to store the block, because we need to invoke it when the user selects an option from the menu. In MenuComponent.m, declare a private property for storing the block.
We start by storing the handler in the selectionHandler property. Whenever the user selects an item from the table view, we invoke the selection handler. Take a look at the implementation of tableView:didSelectRowAtIndexPath: for clarification.
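A minimal implementation of tableView:didSelectRowAtIndexPath: could look like the following sketch, assuming the block is stored in a private selectionHandler property that accepts the user's selection:

- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
    // Invoke the stored selection handler, passing the selected menu option.
    if (self.selectionHandler) {
        self.selectionHandler([self.menuOptions objectAtIndex:indexPath.row]);
    }
    // Hide the menu after the user has made a selection.
    [self toggleMenu];
    self.isMenuShown = NO;
}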
Finally, we need to update the view controller to use the new showMenu: method. Update the view controller's showMenu: method, which is invoked when a swipe gesture is detected, with the one shown below. To make sure that everything is working, we show an alert view displaying the user's selection.
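The updated view controller method might look something like the sketch below. The menuComponent property name is an assumption; adapt it to whatever you named the menu instance in viewDidLoad.

- (void)showMenu:(UISwipeGestureRecognizer *)gesture {
    // The menuComponent property name is hypothetical.
    [self.menuComponent showMenu:^(NSString *selectedOption) {
        // Show an alert view displaying the user's selection.
        UIAlertView *alertView = [[UIAlertView alloc] initWithTitle:@"Menu"
                                                            message:selectedOption
                                                           delegate:nil
                                                  cancelButtonTitle:@"OK"
                                                  otherButtonTitles:nil];
        [alertView show];
    }];
}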
Run the application once again to make sure everything works as expected. The menu component class is ready.
Conclusion
In this tutorial, we've created a reusable component for displaying a menu using UIKit Dynamics. If UIKit Dynamics was new to you, then I hope this tutorial has given you a taste of what you can do with it. Feel free to play around with the demo application to become more familiar with UIKit Dynamics.
Stay tuned for the next tutorial in which we'll create another reusable component, a custom alert view.
Creating applications with flexible layouts has become essential, especially since the release of the iPhone 5 with its 4" screen and the introduction of Dynamic Type in iOS 7, allowing users to change text size across the operating system. Flexible layouts also come in handy with internationalization in mind.
1. What is it?
Auto Layout, which was introduced in iOS 6, enables you to create such flexible layouts. It's a great alternative to autoresizing masks or manually laying out the application's user interface.
Auto Layout enables you to add constraints to views and define the relationships between views. The relation can be between a view and its superview, one of its siblings, or even in relation to itself.
Instead of explicitly specifying a view's frame, Auto Layout lets you define the spacing between and relative positioning of two views using constraints. Auto Layout uses those constraints to calculate the runtime positions of the user interface elements.
You have to set enough constraints on the view to prevent ambiguity about the layout. It is also possible to set too many constraints, which can cause conflicts and make the application crash.
In Xcode 4, whenever you set incomplete or invalid constraints on a view, Interface Builder would replace them with new constraints that mostly did not give you the effect you were after. This led to significant frustration among developers. In Xcode 5, though, it's much easier to use Auto Layout. Xcode no longer forces constraints on a view; instead, you get hints and warnings when a view's constraints are invalid.
While it is possible to work with Auto Layout programmatically, this tutorial will be looking at how to use Interface Builder to create layouts using Auto Layout.
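For comparison, a constraint such as centering a view horizontally in its container can also be expressed in code with the NSLayoutConstraint API. This is only a sketch, since the rest of this tutorial sticks to Interface Builder:

UITextView *textView = [[UITextView alloc] init];
// Opt out of autoresizing-mask translation when adding constraints in code.
textView.translatesAutoresizingMaskIntoConstraints = NO;
[self.view addSubview:textView];

// Center the text view horizontally in its superview.
[self.view addConstraint:[NSLayoutConstraint constraintWithItem:textView
                                                      attribute:NSLayoutAttributeCenterX
                                                      relatedBy:NSLayoutRelationEqual
                                                         toItem:self.view
                                                      attribute:NSLayoutAttributeCenterX
                                                     multiplier:1.0
                                                       constant:0.0]];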
2. Auto Layout Basics
For a simple demonstration of what Auto Layout can do for you, we'll create a simple application and set some constraints on its views. Create a new Xcode project, choose the Single View Application template and set Devices to iPhone.
Storyboards and XIB files created with Xcode 4.5 or later have Auto Layout enabled by default. You can disable it in the File Inspector on the right by unchecking the checkbox labeled Use Auto Layout.
A good reason for disabling Auto Layout is supporting iOS 5 or lower, since Auto Layout is only supported by iOS 6 and above. Other than that, Apple recommends using Auto Layout, as it makes the creation of flexible user interfaces faster and easier.
Open the project's main storyboard, Main.storyboard, add a text view to the View Controller Scene, and position it as shown below.
No constraints have been set on the text view and this has some implications. When you run the application, the text view is positioned just like in Interface Builder. However, when the device is rotated to landscape mode, the text view continues to stick to the left edge of the view and its width is fixed.
At build time, constraints are automatically generated for views that don't have constraints, which explains the behavior that we're seeing. The constraints added to the text view, for example, are a left and top constraint that pin the text view to the top left, and a width and height constraint that fix the text view's size.
Once you start defining constraints, however, it's up to you to make sure that the constraints for a view don't cause conflicts. In the next section, we add a few constraints to the text view to adjust its position and size when the device is rotated or when we run the application on, for example, an iPad that has a larger screen.
3. Adding Constraints
There are several ways layout constraints can be added to a view.
Control and Drag
Hold down the Control key and drag from the view to which you want to add the layout constraint to another view. When you release the mouse, a menu with options should appear. The options depend on the direction of the drag and the view you dragged to.
To illustrate this, drag from the text view to the top of the view controller's view. Xcode will highlight both views to indicate the layout constraint includes both views. When you release the mouse, the menu shows the layout constraints that can be added to the source view, the text view. To center the text view horizontally in the view controller's view, select Center Horizontally In Container from the menu. An orange line appears as a result, signifying the layout constraint you just added.
Auto Layout Menu
You can also add and edit layout constraints using the Auto Layout menu at the bottom of the Interface Builder workspace.
Beginning from the left, the menu allows you to align and pin views, resolve Auto Layout issues, and set the resizing behavior for the selected view. Let me explain what each menu option does.
Align creates alignment constraints that let you center a view in its container or align the edges of two views.
Pin creates spacing constraints. You can set the height and width of the selected view or specify the view's distance to another view.
The Resolving Auto Layout Issues menu lets you resolve Auto Layout issues, for example by updating the view's frame or adding missing constraints.
The Resizing menu lets you specify the resizing behavior of the selected view and how siblings and descendants are affected.
Editor Menu
Each of the aforementioned menu options can also be found in Xcode's Editor menu.
Adding Constraints
To add layout constraints to the text view, select the view in Xcode, hold down the Control key, and drag from the text view to the top of the view controller's view. Select Center Horizontally In Container from the menu that appears. This adds a layout constraint that ensures the text view is always centered in the view controller's view, regardless of the device's orientation.
You may have noticed that the text view has an orange outline. Xcode tells us that the text view's layout constraints are invalid or incomplete. We've specified that the text view should be centered horizontally in its parent view, but the Auto Layout system doesn't know what size the text view should be. Let's add a few more constraints until the text view's outline turns blue to indicate the text view's layout constraints are valid.
Note that it's possible to ignore the warnings and run an application with incomplete layout constraints. However, you should never ship an application with ambiguous layout constraints, because you don't know for sure what the application's user interface will look like on different devices in different orientations.
With the text view selected, Control-Drag from the text view to the top of the view controller's view and select Top Space to Top Layout Guide. This sets a vertical space constraint from the view controller's top layout guide to the text view's top.
Next, Control-Drag from the text view to the view controller's view and select Leading Space to Container to set the distance from the parent view to the left of the text view. Control-Drag from the text view to the view controller's view and select Bottom Space to Bottom Layout Guide to set a vertical space constraint from the view controller's bottom layout guide to the text view's bottom.
The text view's outline should be blue, indicating the layout constraints of the text view are valid and complete. Run the application in the iOS Simulator and change its orientation to inspect the result.
Note that we didn't need to add a horizontal space constraint to specify the distance between the text view's right edge and its superview, because we specified the text view's leading space and centered the text view horizontally in its superview. The Auto Layout system has enough information to correctly lay out the text view. We could accomplish the same result by specifying four space constraints and omitting the alignment constraint.
This example has shown you how to set layout constraints between a view and its parent view. Let's look at another example in which we set layout constraints between sibling elements.
Begin by deleting the text view. This will also delete the text view's layout constraints. Add a text field, a slider, and a segmented control to the view controller's view as shown below.
When you run the application without setting any constraints, the three elements will stick to the left edge of their parent view in landscape.
However, we want the elements to fill the screen's full width as shown below. The text field should expand horizontally and the slider should also expand to take advantage of the screen's width. The segmented control, however, should have a fixed width.
Select the text field and click the Pin button of the Auto Layout menu at the bottom. In the section Spacing to nearest neighbor at the top of the menu, click the top, right, and left lines that surround the square. The lines should turn red as a result. Next, click the button at the bottom labelled Add 3 Constraints to add the specified space constraints.
Select the slider and repeat the same steps by setting a top, left, and right space constraint. This ensures the distance between the slider and the text field and the slider and the segmented control is fixed.
Repeat the same steps for the segmented control, but only add a top and right (trailing) space constraint. In addition, check the Width checkbox and click the Add 3 Constraints button at the bottom. We don't want the segmented control to expand when the screen size changes, which is why we give it a fixed width.
4. Fixing Auto Layout Issues
Fixing Issues
When Xcode gives us errors or warnings about missing or invalid layout constraints, it may not always be clear which constraints need to be added or updated. Xcode helps us by showing which constraints are missing in the Document Outline.
When a layout is invalid or incomplete, a red arrow is visible in the Document Outline. When you click the arrow, a window slides in from the right showing which constraints are missing or invalid. This gives you a clue how to fix the layout.
On the right of each error or warning is a red circle (error) or a yellow triangle (warning). When you click the error or warning, a menu appears with suggestions to fix the problem.
You can also use the Resolve Auto Layout Issues menu to add missing constraints, reset a view's constraints, or to clear constraints. Xcode will automatically add constraints to the selected view for you. This can save you time, but note that it's also possible that the resulting layout isn't what you intended.
Misplaced Views
If you've added layout constraints to a view and you change its size or position, Xcode highlights the view in orange to indicate that the current position and/or size is not in line with its layout constraints.
If you run the application, you'll see that the Auto Layout system enforces the view's layout constraints and ignores the view's new size and position you've set. This is a so-called misplaced view. The screenshot below shows a button that I moved after having specified its layout constraints.
To fix this, you can either delete the layout constraints and set new ones, or you can let Xcode fix it for you. You have two options to fix a misplaced view.
You can move and resize the view to match its layout constraints by selecting Resolve Auto Layout Issues > Update Frames from Xcode's Editor menu.
Or you can update its layout constraints to match the view's new size and position by selecting Resolve Auto Layout Issues > Update Constraints from Xcode's Editor menu.
In the above example, we select Update Constraints to update the layout constraints to the button's new size and position, because we wish to preserve the button's new size and position.
Conclusion
The Auto Layout system makes laying out user interfaces much simpler and faster. Before Auto Layout was introduced, developers had to hard code an application's user interface by setting a view's frame and autoresizing mask. With Auto Layout, this is no longer necessary.
By correctly setting a view's layout constraints, its position is automatically updated regardless of the screen size or orientation. Another area where Auto Layout is useful is application localization. Words and sentences have a different length in different languages. This too can be solved with Auto Layout.
Open source projects are everywhere, on the web, on your computer, and on your mobile phone. In this article, we'll take a look at:
the definition of open source
popular examples of open source projects
and how to get involved in an open source project
1. What is Open Source Software?
Open source software (OSS) is a type of computer software in which the source code is made publicly available and licensed in such a way that anyone can make changes and redistribute the code or executable.
Even though open source software is mostly developed and maintained by a group of people, anyone can access the code and play around with it if they want to.
I've contributed to several open source projects and contributing to an open source project is a great way to become a better developer and give back to the community. You learn from other people's code and learn to write better code yourself. Seeing an open source project you've worked on with other developers come together is one of the most rewarding feelings I've experienced as a developer.
Let's start by taking a look at some popular examples of open source software. You may be surprised by the sheer volume of open source projects and also by some of the companies behind these projects, like Google and Automattic.
2. Popular Examples
There are millions of open source projects available. Below is a list of some very popular and notable examples.
Firefox OS is the mobile operating system developed and maintained by Mozilla.
3. How It Works
An open source project typically involves three stages. Let's take a quick look at each stage.
Stage 1: Contributing
If you want to get involved with an open source project, you could begin by contacting the organization behind the software and asking what opportunities are available. This approach works well for smaller-scale projects and startups. However, you should be aware that the majority of open source projects won't pay for your work; contributions are made on a voluntary basis.
In the past, when I've worked on open source projects, I've had to wait weeks (and in one case, two months) before receiving a response. Sit tight and wait for that all-important confirmation email, and then you're good to go.
Alternatively, for larger-scale projects, you can simply fork your own version of the software and start coding. Be aware, though, that if lots of people are working on the same project as you, your hard work may not be included in the final release of the product, so brace yourself for rejection.
If you're looking for inspiration on a project to start working on, there's always lots of interesting projects to work on, especially if you look on sites like GitHub, SourceForge, and Google Code.
Before you begin actually working on the project, you should familiarize yourself with how the project is run and how its management is structured, so you know who to go to if you need assistance. It's also a good idea to check exactly what needs to be done before you begin; you don't want to mess things up or waste your time working on a feature that someone else is already working on, for example.
Stage 2: Committing
When you've made the changes you want or implemented the feature you had in mind, you commit your changes to the main project and send them to the maintainers of the project for review.
This may be done using GitHub or on a platform like SourceForge. Your changes will usually receive a yes or a no from the organization or the team in charge of the project, indicating whether or not your changes are going to be included in the project. If they are, then it's time for the distribution stage. If not, then it's back to the contribution stage.
Stage 3: Distributing
Possibly the most complex stage of all is the distribution of an open source project. Here, the final version is committed to the repository where the project has been hosted and live versions for non-developers are updated. At this point, the organization and developers say good bye to their hard work and hand it over to the public for general use, and of course, critique.
In Summary
I hope you now feel more confident about open source development and how you can get involved with a project yourself. Open source can be really interesting, but also very frustrating at times. The key thing to keep in mind, though, is to keep going and to not give up when you hit an obstacle.
In the next part of this series, we'll take a look at licensing for open source projects and what some of the available options are for developers. If you have any questions, I'd be happy to answer them for you in the comments below.
Android Studio is a fairly new IDE (Integrated Development Environment) made available for free by Google to Android developers. Android Studio is based on IntelliJ IDEA, an IDE that also offers a good Android development environment. In this tutorial, I'll show you how to create a new Android project and take advantage of the features that Android Studio has to offer.
1. Project Setup
Before you start exploring Android Studio, you'll first need to download and install it. Note that you need to have JDK 6 or higher installed. If you're on Windows, launch the .exe file and follow the steps of the setup wizard. If you're running OS X, mount the disk image by double-clicking it and drag Android Studio to your Applications folder.
If you've successfully completed the above steps, then your development environment should be set up correctly. You're now ready to create your first Android application using Android Studio. When you launch Android Studio for the first time, you should be presented with a welcome screen, offering you a number of choices to get you started.
In this tutorial, we're going to choose the New Project option. However, you can choose Import Project if you'd like to import a project from, for example, Eclipse, into Android Studio. Android Studio will convert the Eclipse project to an Android Studio project, adding the necessary configuration files for you.
If you select Open Project from the list of options, you can open projects created with either Android Studio or IntelliJ IDEA. By choosing Check out from Version Control, you can check out a copy of a project that's under version control. This is a great way to quickly get up to speed with an existing project.
To get us started, choose New Project from the list of options. This will show you a list of options to configure your new project. In this tutorial, we're going to create a simple application to show you some of Android Studio's most important features. I'm sure you agree that there's no better name for our project than HelloWorld.
As you can see in the above screenshot, I've named my application HelloWorld and set the module name to HelloWorld. If you're unfamiliar with IntelliJ IDEA, you may be wondering what a module is. A module is a discrete unit of functionality that can be compiled, run, tested, and debugged independently. Modules contain source code, build scripts, and everything else required for their specific task.
When creating a new project, you can also set the package name of the project. By default, Android Studio sets the last element of the project's package name to the name of the module, but you can change it to whatever you want.
The other settings are the project's location on your machine, the minimum and target SDK, the SDK your project will be compiled with, and the project's theme. You can also tell Android Studio to create an Activity class and a custom launch icon for you, and whether the project supports GridLayout, Fragments, a Navigation Drawer, or an Action Bar.
We won't create a custom icon for this application so you can uncheck the checkbox labeled Create custom launch icon. Click Next to continue setting up your project.
Because we checked the checkbox Create activity in the previous step, you are asked to configure the Activity class Android Studio will create for you.
Since we'll be starting with a blank Activity class, you can click Next to proceed to the next step in the setup process in which you're asked to name the Activity class, the main layout, and the fragment layout. You can also set the navigation type, which we'll leave at None for this project. Take a look at the next screenshot to see what your settings should look like.
After clicking Finish, you'll be presented with Android Studio's user interface with the project explorer on the left and the workspace on the right. With your project set up in Android Studio, it's time to explore some of the key features of Android Studio.
2. Android Virtual Devices
An Android Virtual Device or AVD is an emulator configuration, allowing you to model an Android device. This makes running and testing applications on a wide range of devices much easier. With an Android Virtual Device, you can specify the hardware and software the Android Emulator needs to emulate.
The preferred way to create an Android Virtual Device is through the AVD Manager, which you can access in Android Studio by selecting Android > AVD Manager from the Tools menu.
If your development environment is set up correctly, the Android Virtual Device Manager should look similar to the screenshot below.
To create a new AVD, click New... on the right, give the AVD a name, and configure the virtual device as shown below. Click OK to create your first AVD.
To use your newly created AVD, select it from the list in the AVD manager, and click Start... on the right. If your AVD is set up correctly, the Android Emulator should launch as shown in the screenshot below.
With the Android Emulator up and running, it's time to launch your application by selecting Run 'helloworld' from the Run menu. That's how easy it is to run an application in the Android Emulator.
3. Live Layout
Android Studio's live layout feature lets you preview your application's user interface without running it on a device or in the emulator. It's a powerful tool that can save you hours, because previewing your application's user interface is much faster with live layouts.
To work with live layouts, double-click the XML layout file and select the Text tab at the bottom of the workspace. Select the Preview tab on the right of the workspace to preview the current layout. Any changes you make to the XML layout will be reflected in the preview on the right. Take a look at the screenshot below to get a better idea of this neat feature.
There are a number of other advantages of the live layout feature that are worth pointing out. You can, for example, create a variation of the XML layout you're currently working on by selecting an option from the first menu in the Preview pane, such as separate views for portrait and landscape, and Android Studio will create the necessary folders and files for you.
The second menu in the Preview pane lets you change the size of the device shown in the Preview pane. The third menu lets you change the orientation of the device shown in the Preview pane, which makes it easy to see how a layout looks in different orientations and using different themes.
The fourth menu in the Preview pane gives you easy access to the Activity or fragment in which the layout is used. The Preview pane also lets you change the language used in the live layout to make it easy to preview a layout in different languages. The rightmost menu lets you change the API version.
The Preview pane also includes controls to zoom in on the layout, refresh the Preview pane, or take a screenshot.
4. Templates
Android Studio provides developers with a number of templates to speed up development. These templates automatically create an Activity and the necessary XML files. You can use these templates to create a basic Android application, which you can then run on a device or in the emulator.
With Android Studio, you can create a template when you create a new Activity. Right-click on the package name in the project navigator on the left, select New from the menu, and choose Activity from the list of options. Android Studio then shows you a list of templates, such as Blank Activity, Fullscreen Activity, and Tabbed Activity.
You can also select Image Asset from the menu, which will launch a wizard that guides you through the creation process. Let me show you how to create a new Activity based on the Login Activity template. Select the Login Activity option from the list of Activity templates to fire up the wizard.
As you can see in the above screenshot, I've named the Activity LoginActivity, set the Layout Name to activity_login, and given the Activity a title of Sign In. The checkbox labeled Include Google+ sign in is checked by default. Uncheck it since we won't be using this feature in our example.
You can optionally set the Hierarchical Parent of the new Activity. This will let you navigate back if you tap the device's back button. We will leave this field empty. After clicking Finish, Android Studio creates the necessary files and folders for you. If all went well, you should see a new Activity and Layout in your project.
The next step is to set up the new Activity in the manifest file so it's used as the main Activity when the application launches. As you can see in manifest file below, the LoginActivity class has its own activity node.
To make your application launch the LoginActivity you created, remove the activity node for the LoginActivity class and replace com.tuts.HelloWorld.MainActivity with com.tuts.HelloWorld.LoginActivity in the main activity node. The result is that the application will now use the LoginActivity class as its main Activity.
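After this change, the relevant part of the manifest might look like the following sketch; the android:label value is an assumption and your generated manifest may differ:

<activity
    android:name="com.tuts.HelloWorld.LoginActivity"
    android:label="@string/title_activity_login" >
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>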
When you build and run your application in the emulator, you should see a screen similar to the one shown below. This means that we've successfully replaced the blank Activity class with the newly created LoginActivity class.
5. Lint Tools
Testing your code is one thing, but it's equally important to apply best practices when writing code. This will improve performance and the overall stability of your application. It's also much easier to maintain a properly structured project.
Android Studio includes Android Lint, a static analyzer that analyzes your project's source code. It can detect potential bugs and other problems in your code that the compiler may overlook.
The below screenshot, for example, tells us that the LinearLayout in this layout is of no use. The nice thing about Android Lint is that it gives you a reason for the warning or error, which makes it easier to fix or resolve.
It's good practice to run Android Studio's lint tool from time to time to check your project for potential problems. The lint tool will even tell you if you have duplicate images or translations.
To run the lint tool, select Inspect Code… from the Analyze menu in Android Studio to start the process. When Android Studio has finished inspecting your project, it will present you with the results at the bottom of the window. Note that in addition to Android Lint, Android Studio performs a number of other checks as well. Simply double-click an issue to navigate to the file in which the problem is located.
6. Rich Layout Editor
Android Studio has a rich layout editor in which you can drag and drop user interface components. You can also preview layouts on multiple screen configurations as we saw earlier in this tutorial.
The rich layout editor is very straightforward to use. We first need a layout to work with. Navigate to the layout folder in your project's res folder, right-click the layout folder, and select New > Layout resource file from the menu that appears.
Give the new layout a name, set its root element, and click OK. Android Studio will automatically open the layout in the editor on the right.
At the bottom of the editor, you should see two tabs, Design and Text. Clicking the Text tab brings up the editor, allowing you to make changes to the currently selected layout.
Clicking the Design tab brings up another editor that shows you a preview of the layout. To add a widget to the layout, drag it from the list of widgets on the left to the layout on the right. It's that simple.
Conclusion
In this tutorial, we've taken a brief look at some of the key features of Android Studio. It is very similar to IntelliJ IDEA, but it contains a number of important enhancements that make Android development easier, faster, and more enjoyable.
In the first tutorial of this short series on UIKit Dynamics, we learnt the basics of the API by creating an animated menu component. In this tutorial, we'll continue working on our project and implement another animated component, a custom alert view.
1. Overview
The default alert view on iOS is great, but it's not very customizable in terms of appearance and behavior. If you need an alert view that is customizable, then you need to create your own solution and that's what we'll do in this tutorial. The focus of this tutorial is on the behavior of the alert view and not so much on its functionality. Let's take a look at the result we're after.
The alert view will be a UIView instance to which we'll add the following subviews:
a UILabel object for displaying the alert view's title
a UILabel object for displaying the alert view's message
one or more UIButton instances for letting the user interact with the alert view
We'll use the UISnapBehavior class to present the alert view. As its name indicates, this UIDynamicBehavior subclass forces a dynamic item to snap to a point as if it were magnetically drawn to it.
The UISnapBehavior class defines one additional property, damping, that defines the amount of oscillation when the dynamic item has reached the point to which it is attracted.
We'll use a gravity behavior, in combination with a collision and push behavior, to dismiss the alert view. Remember that we already used these behaviors in the previous tutorial.
The alert view will animate in from the top of the screen. When the alert view is about to appear, the snap behavior will make it drop into view and snap to the center of the screen. To dismiss the alert view, a push behavior will briefly push it to the bottom of the screen and a gravity behavior will then pull it to the top of the screen and make it animate off-screen.
We'll create a custom initialization method for the alert view component that accepts the alert's title, message, button titles, and its parent view. We won't be implementing a delegate protocol for the alert view. Instead, we'll make use of blocks, which makes for a more elegant and modern solution. The block or handler will accept two parameters, the index and the title of the button the user tapped.
We'll also display a semi-transparent view behind the alert view to prevent the user from interacting with its parent view as long as the alert view is visible. Let's start by taking a look at the alert view's properties and the custom initializer.
2. Properties and Initialization
Step 1: Creating the Alert View Class
Press Command-N on your keyboard to create a new file and select Objective-C class from the list of iOS templates. Make it a subclass of NSObject and name it AlertComponent.
Step 2: Declaring Properties
The next step is to declare a few private properties. Open AlertComponent.m, add a class extension at the top, and declare the following properties:
The function of each property will become clear as we implement the alert component. It's time to create the component's custom initializer.
Step 3: Initialization
As I already mentioned, we're going to use a custom initializer to make working with the alert component as easy as possible. The initializer accepts four parameters: the alert's title, its message, the button titles, and the view to which the alert component will be added, its parent view. Open AlertComponent.h and add the following declaration:
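Here is the declaration, matching the implementation we'll write later in this tutorial:

- (id)initAlertWithTitle:(NSString *)title andMessage:(NSString *)message andButtonTitles:(NSArray *)buttonTitles andTargetView:(UIView *)targetView;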
In this step, we set up the alert view and add its subviews to it. We also set up the background view and the dynamic animator.
Open AlertComponent.m and declare the following private methods in the private class extension:
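At a minimum, the class extension declares the two setup methods used in the next steps. Any additional helpers are omitted from this sketch:

// In the private class extension at the top of AlertComponent.m:
- (void)setupAlertView;
- (void)setupBackgroundView;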
The method names are self-explanatory. Let's start by implementing setupAlertView, since most of the alert's setup takes place in this method.
Step 2: Setting Up the Alert View
In setupAlertView, we do three things:
initialize and configure the alert view
initialize and configure the alert view's labels
initialize and configure the alert view's buttons
Let's start by calculating the alert view's size and position as shown in the code snippet below.
- (void)setupAlertView {
    // Set the size of the alert view.
    CGSize alertViewSize = CGSizeMake(250.0, 130.0 + 50.0 * self.buttonTitles.count);

    // Set the initial origin point depending on the direction of the alert view.
    CGPoint initialOriginPoint = CGPointMake(self.targetView.center.x, self.targetView.frame.origin.y - alertViewSize.height);
}
We start by setting the alert view's size. To make the alert view dynamic, we add 50.0 points to its height for every button. Also note that the initial origin of the alert view is off-screen. The next step is initializing and setting up the alert view:
self.alertView = [[UIView alloc] initWithFrame:CGRectMake(initialOriginPoint.x, initialOriginPoint.y, alertViewSize.width, alertViewSize.height)];
// Background color.
[self.alertView setBackgroundColor:[UIColor colorWithRed:0.94 green:0.94 blue:0.94 alpha:1.0]];
// Make the alert view with rounded corners.
[self.alertView.layer setCornerRadius:10.0];
// Set a border to the alert view.
[self.alertView.layer setBorderWidth:1.0];
[self.alertView.layer setBorderColor:[UIColor blackColor].CGColor];
// Assign the initial alert view frame to the respective property.
self.initialAlertViewFrame = self.alertView.frame;
Using alertViewSize and initialOriginPoint, we initialize the alertView object and set its background color. We round the alert view's corners by setting its layer's cornerRadius to 10.0, its borderWidth to 1.0, and its borderColor to black. We also store the alert view's initial frame in its initialAlertViewFrame property as we'll be needing it later.
If Xcode tells you it doesn't know about the alertView's layer property, then add the following import statement at the top of the implementation file:
#import <QuartzCore/QuartzCore.h>
It's time to add the labels. Let's start with the title label.
// Setup the title label.
self.titleLabel = [[UILabel alloc] initWithFrame:CGRectMake(0.0, 10.0, self.alertView.frame.size.width, 40.0)];
[self.titleLabel setText:self.title];
[self.titleLabel setTextAlignment:NSTextAlignmentCenter];
[self.titleLabel setFont:[UIFont fontWithName:@"Avenir-Heavy" size:14.0]];
// Add the title label to the alert view.
[self.alertView addSubview:self.titleLabel];
Setting up the message label is pretty similar.
// Setup the message label.
self.messageLabel = [[UILabel alloc] initWithFrame:CGRectMake(0.0, self.titleLabel.frame.origin.y + self.titleLabel.frame.size.height, self.alertView.frame.size.width, 80.0)];
[self.messageLabel setText:self.message];
[self.messageLabel setTextAlignment:NSTextAlignmentCenter];
[self.messageLabel setFont:[UIFont fontWithName:@"Avenir" size:14.0]];
[self.messageLabel setNumberOfLines:3];
[self.messageLabel setLineBreakMode:NSLineBreakByWordWrapping];
// Add the message label to the alert view.
[self.alertView addSubview:self.messageLabel];
Note that the numberOfLines property is set to 3 and lineBreakMode is set to NSLineBreakByWordWrapping.
The last thing we need to set up is the alert view's buttons. Even though the number of buttons can vary, setting up and positioning the buttons is pretty simple. We separate the buttons by 5 points and use a for loop to initialize them.
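A sketch of the button setup, consistent with the description above, might look like this. The exact button height, spacing within each 50.0-point row, and the system button type are assumptions:

// The labels occupy the top 130.0 points of the alert view, so the buttons start below them.
CGFloat buttonY = 130.0;

for (NSInteger i = 0; i < self.buttonTitles.count; i++) {
    UIButton *button = [UIButton buttonWithType:UIButtonTypeSystem];
    // Leave 5.0 points of spacing between consecutive buttons.
    [button setFrame:CGRectMake(0.0, buttonY + 5.0, alertViewSize.width, 45.0)];
    [button setTitle:self.buttonTitles[i] forState:UIControlStateNormal];
    // The tag lets handleButtonTap: identify which button was tapped.
    [button setTag:i];
    [button addTarget:self action:@selector(handleButtonTap:) forControlEvents:UIControlEventTouchUpInside];
    [self.alertView addSubview:button];
    buttonY += 50.0;
}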
Note that each button invokes the handleButtonTap: method when it's tapped. We can determine which button the user tapped by inspecting the button's tag property.
Finally, add the alert view to the target or parent view by adding the following line at the bottom of the setupAlertView method:
// Add the alert view to the parent view.
[self.targetView addSubview:self.alertView];
Step 3: Setting Up the Background View
The second method we need to implement is setupBackgroundView. The background view will prevent the user from interacting with the alert view's parent view as long as the alert view is shown. We initially set its alpha property to 0.0, which means it's transparent.
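A minimal implementation of setupBackgroundView could look like this. The backgroundView property name and the black background color are assumptions:

- (void)setupBackgroundView {
    // Cover the entire parent view to block user interaction with it.
    self.backgroundView = [[UIView alloc] initWithFrame:self.targetView.frame];
    [self.backgroundView setBackgroundColor:[UIColor blackColor]];
    // Start fully transparent; the alpha is animated when the alert is shown.
    [self.backgroundView setAlpha:0.0];
    [self.targetView addSubview:self.backgroundView];
}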
With setupAlertView and setupBackgroundView ready to use, let's implement the custom initializer we declared earlier.
- (id)initAlertWithTitle:(NSString *)title andMessage:(NSString *)message andButtonTitles:(NSArray *)buttonTitles andTargetView:(UIView *)targetView {
    if (self = [super init]) {
        // Assign the parameter values to local properties.
        self.title = title;
        self.message = message;
        self.targetView = targetView;
        self.buttonTitles = buttonTitles;

        // Setup the background view.
        [self setupBackgroundView];

        // Setup the alert view.
        [self setupAlertView];

        // Setup the animator.
        self.animator = [[UIDynamicAnimator alloc] initWithReferenceView:self.targetView];
    }

    return self;
}
We set the title, message, targetView, and buttonTitles properties, invoke setupBackgroundView and setupAlertView, and initialize the dynamic animator, passing in self.targetView as its reference view.
4. Showing the Alert View
To show the alert view after it's been initialized, we need to declare and implement a public method that can be called by, for example, the view controller hosting the alert view. Open AlertComponent.h and add the following method declaration:
- (void)showAlertView;
Head back to AlertComponent.m to implement showAlertView. As I mentioned earlier in this tutorial, we'll be using a new UIDynamicBehavior subclass to show the alert view, UISnapBehavior. Let's see how we use this class in showAlertView.
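A sketch of showAlertView based on that description follows. The damping value, animation duration, and backgroundView property name are assumptions:

- (void)showAlertView {
    // Remove any existing behaviors to avoid conflicts.
    [self.animator removeAllBehaviors];

    // Snap the alert view to the center of its parent view.
    UISnapBehavior *snapBehavior = [[UISnapBehavior alloc] initWithItem:self.alertView snapToPoint:self.targetView.center];
    // Reduce the oscillation when the alert view reaches the snap point.
    [snapBehavior setDamping:0.8];
    [self.animator addBehavior:snapBehavior];

    // Fade in the semi-transparent background view.
    [UIView animateWithDuration:0.25 animations:^{
        [self.backgroundView setAlpha:0.5];
    }];
}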
We start by removing any existing dynamic behaviors from the dynamic animator to ensure that no conflicts pop up. Remember that some dynamic behaviors can only be added once to the dynamic animator, such as a gravity behavior. Also, we'll add other dynamic behaviors to dismiss the alert view.
As you can see, using a snap behavior isn't difficult. We specify which dynamic item the behavior should be applied to and set the point to which the dynamic item should snap. We also set the behavior's damping property as we discussed earlier. Also note that we animate the alpha property of the background view.
To test the alert view, we need to make some changes to the ViewController class. Let's start by adding a UIButton instance to the view controller's view to show the alert view. Open Main.storyboard and drag a UIButton instance from the Object Library to the view controller's view. Position the button near the bottom of the view and give it a title of Show Alert View. Add an action to ViewController.h as shown below.
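The action declaration in ViewController.h is a one-liner:

- (IBAction)showAlertView:(id)sender;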
Head back to the storyboard and connect the view controller's action to the button. Open ViewController.m and import the header file of the AlertComponent class.
#import "AlertComponent.h"
Next, declare a property in the private class extension of type AlertComponent and name it alertComponent.
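The class extension in ViewController.m then looks like this. The property attributes are the usual ones for an object property:

@interface ViewController ()

@property (nonatomic, strong) AlertComponent *alertComponent;

@end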
Run your application and tap the button to show the alert view. The result should look similar to the one below.
5. Hiding the Alert View
As we saw earlier, the handleButtonTap: method is invoked when the user taps a button of the alert view. The alert view should hide when one of the buttons is tapped. Let's see how this works.
Revisit AlertComponent.m and, in the private class extension, declare the handleButtonTap: method.
The angle property of the push behavior defines the direction of the push. By setting the angle to M_PI_2, the force of the push behavior is directed towards the bottom of the screen.
The next step is adding the gravity behavior. The vector we pass to setGravityDirection will result in a force towards the top of the screen, pulling the alert view upwards.
We also need a dynamic item behavior for setting the elasticity of the collision. The result is that the alert view will bounce a little when it collides with the off-screen boundary.
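Putting the behaviors described above together, handleButtonTap: might look like this. The push magnitude, elasticity, and boundary coordinates are assumptions, and invoking the selection handler is added later in this tutorial:

- (void)handleButtonTap:(UIButton *)sender {
    [self.animator removeAllBehaviors];

    // Briefly push the alert view towards the bottom of the screen.
    UIPushBehavior *pushBehavior = [[UIPushBehavior alloc] initWithItems:@[self.alertView] mode:UIPushBehaviorModeInstantaneous];
    [pushBehavior setAngle:M_PI_2 magnitude:20.0];
    [self.animator addBehavior:pushBehavior];

    // Pull the alert view towards the top of the screen.
    UIGravityBehavior *gravityBehavior = [[UIGravityBehavior alloc] initWithItems:@[self.alertView]];
    [gravityBehavior setGravityDirection:CGVectorMake(0.0, -1.0)];
    [self.animator addBehavior:gravityBehavior];

    // Let the alert view collide with a boundary above the screen.
    UICollisionBehavior *collisionBehavior = [[UICollisionBehavior alloc] initWithItems:@[self.alertView]];
    [collisionBehavior addBoundaryWithIdentifier:@"topBoundary"
                                       fromPoint:CGPointMake(0.0, -self.alertView.frame.size.height)
                                         toPoint:CGPointMake(self.targetView.frame.size.width, -self.alertView.frame.size.height)];
    [self.animator addBehavior:collisionBehavior];

    // Add some elasticity so the alert view bounces against the boundary.
    UIDynamicItemBehavior *itemBehavior = [[UIDynamicItemBehavior alloc] initWithItems:@[self.alertView]];
    [itemBehavior setElasticity:0.4];
    [self.animator addBehavior:itemBehavior];

    // Fade out the background view.
    [UIView animateWithDuration:0.25 animations:^{
        [self.backgroundView setAlpha:0.0];
    }];
}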
Even though the alert view responds to user interaction, we currently don't know which button the user has tapped. That's what we'll focus on in this section.
As we did with the menu component, we're going to make use of blocks to solve this problem. Blocks make for an elegant solution and can often be easier to use than a delegate protocol.
We start by updating the public showAlertView method. The method needs to accept a completion handler that the alert view invokes when the user has tapped one of the buttons. In AlertComponent.h, replace the declaration of the showAlertView method with one that accepts such a handler.
The completion handler accepts two parameters, the index, of type NSInteger, and the title, of type NSString, of the button that was tapped by the user. If we want to invoke the completion handler when the user taps a button of the alert view, we need to keep a reference to the completion handler. This means we need to declare a property for the completion handler. We do this in the private class extension in AlertComponent.m.
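In code, the new declaration and the matching property might look like this. The exact block signature is an assumption based on the parameters described above:

// In AlertComponent.h, replacing the showAlertView declaration:
- (void)showAlertViewWithSelectionHandler:(void (^)(NSInteger buttonIndex, NSString *buttonTitle))handler;

// In the private class extension in AlertComponent.m:
@property (nonatomic, copy) void (^selectionHandler)(NSInteger buttonIndex, NSString *buttonTitle);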
Still in AlertComponent.m, update the method implementation like we did in the header file a moment ago and store the completion handler in the selectionHandler property, which we just declared.
The AlertComponent is complete. It's time to test everything. Head back to ViewController.m and update the showAlertView: action as shown below. As you can see, we invoke the new showAlertViewWithSelectionHandler: method and pass in a block, which will be called when a button in the alert view is tapped by the user.
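The updated action might look something like this. The title, message, and button titles are placeholders:

- (IBAction)showAlertView:(id)sender {
    self.alertComponent = [[AlertComponent alloc] initAlertWithTitle:@"Alert"
                                                          andMessage:@"This is a demo alert."
                                                     andButtonTitles:@[@"Cancel", @"OK"]
                                                       andTargetView:self.view];

    [self.alertComponent showAlertViewWithSelectionHandler:^(NSInteger buttonIndex, NSString *buttonTitle) {
        // Log the index and title of the tapped button.
        NSLog(@"Tapped button %ld with title %@", (long)buttonIndex, buttonTitle);
    }];
}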
That's it. Run your application once more and inspect Xcode's console to see the result of our work.
Conclusion
UIKit Dynamics was first introduced in iOS 7 and can help you create realistic animations quickly. This short series has illustrated that leveraging UIKit Dynamics in your projects isn't difficult and you don't need to be an expert in math or physics.
Note that UIKit Dynamics is primarily meant for use in view-based applications. If you're looking for a similar solution for games, then I recommend taking a look at Apple's Sprite Kit, which is aimed at game development.
Android Studio is a fairly new IDE (Integrated Development Environment) made available for free by Google to Android developers. Android Studio is based on IntelliJ IDEA, an IDE that also offers a good Android development environment. In this tutorial, I'll show you how to create a new Android project and take advantage of the features that Android Studio has to offer.
1. Project Setup
Before you start exploring Android Studio, you'll first need to download and install it. Note that you need to have JDK 6 or higher installed. If you're on Windows, launch the .exe file and follow the steps of the setup wizard. If you're running OS X, mount the disk image by double-clicking it and drag Android Studio to your Applications folder.
If you've successfully completed the above steps, then your development environment should be set up correctly. You're now ready to create your first Android application using Android Studio. When you launch Android Studio for the first time, you should be presented with a welcome screen, offering you a number of choices to get you started.
In this tutorial, we're going to choose the New Project option. However, you can choose Import Project if you'd like to import a project from, for example, Eclipse, into Android Studio. Android Studio will convert the Eclipse project to an Android Studio project, adding the necessary configuration files for you.
If you select Open Project from the list of options, you can open projects created with either Android Studio or IntelliJ IDEA. By choosing Check out from Version Control, you can check out a copy of a project that's under version control. This is a great way to quickly get up to speed with an existing project.
To get us started, choose New Project from the list of options. This will show you a list of options to configure your new project. In this tutorial, we're going to create a simple application to show you some of Android Studio's most important features. I'm sure you agree that there's no better name for our project than HelloWorld.
As you can see in the above screenshot, I've named my application HelloWorld and set the module name to HelloWorld. If you're unfamiliar with IntelliJ IDEA, you may be wondering what a module is. A module is a discrete unit of functionality that can be compiled, run, tested, and debugged independently. Modules contain source code, build scripts, and everything else required for their specific task.
When creating a new project, you can also set the package name of the project. By default, Android Studio sets the last element of the project's package name to the name of the module, but you can change it to whatever you want.
The other settings are the project's location on your machine, the minimum and target SDK, the SDK your project will be compiled with, and the project's theme. You can also tell Android Studio to create an Activity class and a custom launch icon for you, and whether the project supports GridLayout, Fragments, a Navigation Drawer, or an Action Bar.
We won't create a custom icon for this application so you can uncheck the checkbox labeled Create custom launch icon. Click Next to continue setting up your project.
Because we checked the checkbox Create activity in the previous step, you are asked to configure the Activity class Android Studio will create for you.
Since we'll be starting with a blank Activity class, you can click Next to proceed to the next step in the setup process in which you're asked to name the Activity class, the main layout, and the fragment layout. You can also set the navigation type, which we'll leave at None for this project. Take a look at the next screenshot to see what your settings should look like.
After clicking Finish, you'll be presented with Android Studio's user interface with the project explorer on the left and the workspace on the right. With your project set up in Android Studio, it's time to explore some of the key features of Android Studio.
2. Android Virtual Devices
An Android Virtual Device or AVD is an emulator configuration, allowing you to model an Android device. This makes running and testing applications on a wide range of devices much easier. With an Android Virtual Device, you can specify the hardware and software the Android Emulator needs to emulate.
The preferred way to create an Android Virtual Device is through the AVD Manager, which you can access in Android Studio by selecting Android > AVD Manager from the Tools menu.
If your development environment is set up correctly, the Android Virtual Device Manager should look similar to the screenshot below.
To create a new AVD, click New... on the right, give the AVD a name, and configure the virtual device as shown below. Click OK to create your first AVD.
To use your newly created AVD, select it from the list in the AVD manager, and click Start... on the right. If your AVD is set up correctly, the Android Emulator should launch as shown in the screenshot below.
With the Android Emulator up and running, it's time to launch your application by selecting Run 'helloworld' from the Run menu. That's how easy it is to run an application in the Android Emulator.
3. Live Layout
Android Studio's live layout feature lets you preview your application's user interface without the need to run it on a device or the emulator. The live layout feature is a powerful tool that will literally save you hours. Viewing your application's user interface is much faster using live layouts.
To work with live layouts, double-click the XML layout file and select the Text tab at the bottom of the workspace. Select the Preview tab on the right of the workspace to preview the current layout. Any changes you make to the XML layout will be reflected in the preview on the right. Take a look at the screenshot below to get a better idea of this neat feature.
There are a number of other advantages of the live layout feature that are worth pointing out. You can, for example, create a variation of the XML layout you're currently working on by selecting an option from the first menu in the Preview pane. This lets you create separate views for portrait and landscape, and Android Studio will create the necessary folders and files for you.
The second menu in the Preview pane lets you change the size of the device shown in the Preview pane. The third menu lets you change the orientation of the device shown in the Preview pane, which makes it easy to see how a layout looks in different orientations and using different themes.
The fourth menu in the Preview pane gives you easy access to the Activity or fragment in which the layout is used. The Preview pane also lets you change the language used in the live layout to make it easy to preview a layout in different languages. The rightmost menu lets you change the API version.
The Preview pane also includes controls to zoom in on the layout, refresh the Preview pane, or take a screenshot.
4. Templates
Android Studio provides developers with a number of templates to speed up development. These templates automatically create an Activity and the necessary XML files. You can use these templates to create a basic Android application, which you can then run on a device or in the emulator.
With Android Studio, you can create a template when you create a new Activity. Right-click on the package name in the project navigator on the left, select New from the menu, and choose Activity from the list of options. Android Studio then shows you a list of templates, such as Blank Activity, Fullscreen Activity, and Tabbed Activity.
You can also select Image Asset from the menu, which will launch a wizard that guides you through the creation process. Let me show you how to create a new Activity based on the Login Activity template. Select the Login Activity option from the list of Activity templates to fire up the wizard.
As you can see in the above screenshot, I've named the Activity LoginActivity, set the Layout Name to activity_login, and given the Activity a title of Sign In. The checkbox labeled Include Google+ sign in is checked by default. Uncheck it since we won't be using this feature in our example.
You can optionally set the Hierarchical Parent of the new Activity. This will let you navigate back if you tap the device's back button. We will leave this field empty. After clicking Finish, Android Studio creates the necessary files and folders for you. If all went well, you should see a new Activity and Layout in your project.
The next step is to set up the new Activity in the manifest file so it's used as the main Activity when the application launches. As you can see in the manifest file below, the LoginActivity class has its own activity node.
To make your application launch the LoginActivity you created, remove the activity node for the LoginActivity class and replace com.tuts.HelloWorld.MainActivity with com.tuts.HelloWorld.LoginActivity. The result is that the application will now use the LoginActivity class as its main Activity.
When you build and run your application in the emulator, you should see a screen similar to the one shown below. This means that we've successfully replaced the blank Activity class with the newly created LoginActivity class.
5. Lint Tools
Testing your code is one thing, but it's equally important to apply best practices when writing code. This will improve performance and the overall stability of your application. It's also much easier to maintain a properly structured project.
Android Studio includes Android Lint, a static analyzer that analyzes your project's source code. It can detect potential bugs and other problems in your code that the compiler may overlook.
The below screenshot, for example, tells us that the LinearLayout in this layout is of no use. The nice thing about Android Lint is that it gives you a reason for the warning or error, which makes it easier to fix or resolve.
It's good practice to run Android Studio's lint tool from time to time to check your project for potential problems. The lint tool will even tell you if you have duplicate images or translations.
To run the lint tool, select Inspect Code… from the Analyze menu in Android Studio to start the process. When Android Studio has finished inspecting your project, it will present you with the results at the bottom of the window. Note that in addition to Android Lint, Android Studio performs a number of other checks as well. Simply double-click an issue to navigate to the file in which the problem is located.
6. Rich Layout Editor
Android Studio has a rich layout editor in which you can drag and drop user interface components. You can also preview layouts on multiple screen configurations as we saw earlier in this tutorial.
The rich layout editor is very straightforward to use. We first need a layout to work with. Navigate to the layout folder in your project's res folder, right-click the layout folder, and select New > Layout resource file from the menu that appears.
Give the new layout a name, set its root element, and click OK. Android Studio will automatically open the layout in the editor on the right.
At the bottom of the editor, you should see two tabs, Design and Text. Clicking the Text tab brings up the editor, allowing you to make changes to the currently selected layout.
Clicking the Design tab brings up another editor that shows you a preview of the layout. To add a widget to the layout, drag it from the list of widgets on the left to the layout on the right. It's that simple.
Conclusion
In this tutorial, we've taken a brief look at some of the key features of Android Studio. It is very similar to IntelliJ IDEA, but it contains a number of important enhancements that make Android development easier, faster, and more enjoyable.
With everything about Core Data data models still fresh in your mind, it's time to start working with Core Data. In this article, we meet NSManagedObject, the class you'll interact with most when working with Core Data. You'll learn how to create, read, update, and delete records.
You'll also get to know a few other Core Data classes, such as NSFetchRequest and NSEntityDescription. Let me start by introducing you to NSManagedObject, your new best friend.
1. Managed Objects
Instances of NSManagedObject represent a record in Core Data's backing store. Remember, it doesn't matter what that backing store looks like. However, to revisit the database analogy, an NSManagedObject instance contains the information of a row in a database table.
The reason Core Data uses NSManagedObject instead of NSObject as its base class for modeling records will make more sense a bit later. Before we start working with NSManagedObject, we need to know a few things about this class.
NSEntityDescription
Each NSManagedObject instance is associated with an instance of NSEntityDescription. The entity description includes information about the managed object, such as the entity of the managed object as well as its attributes and relationships.
NSManagedObjectContext
A managed object is also linked to an instance of NSManagedObjectContext. The managed object context to which a managed object belongs monitors the managed object for changes.
2. Creating a Record
With the above in mind, creating a managed object is pretty straightforward. To make sure a managed object is properly configured, it is recommended to use the designated initializer for creating new NSManagedObject instances. Let's see how this works by creating a new person object.
Open the project from the previous article or clone it from GitHub. Because we won't be building a functional application in this article, we'll do most of our work in the application delegate class, TSPAppDelegate. Open TSPAppDelegate.m and update the implementation of application:didFinishLaunchingWithOptions: as shown below.
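The updated implementation looks something like this, assuming the Core Data stack from the Xcode template with a managedObjectContext property on the application delegate:

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // Ask Core Data for the entity description of the Person entity.
    NSEntityDescription *entityDescription = [NSEntityDescription entityForName:@"Person" inManagedObjectContext:self.managedObjectContext];

    // Create a new managed object and insert it into the managed object context.
    NSManagedObject *newPerson = [[NSManagedObject alloc] initWithEntity:entityDescription insertIntoManagedObjectContext:self.managedObjectContext];

    return YES;
}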
The first thing we do is create an instance of the NSEntityDescription class by invoking entityForName:inManagedObjectContext:. We pass in the name of the entity we want to create a managed object for, @"Person", and an NSManagedObjectContext instance.
Why do we need to pass in an NSManagedObjectContext object? We specify the name of the entity that we want to create a managed object for, but we also need to tell Core Data where it can find the data model for that entity. Remember that a managed object context is tied to a persistent store coordinator and a persistent store coordinator keeps a reference to a data model. When we pass in a managed object context, Core Data asks its persistent store coordinator for its data model to find the entity we're looking for.
In the second step, we invoke the designated initializer of the NSManagedObject class, initWithEntity:insertIntoManagedObjectContext:. We pass in the entity description and an NSManagedObjectContext instance. Wait, why do we need to pass in another NSManagedObjectContext instance? Remember what I wrote earlier. A managed object is associated with an entity description and it lives in a managed object context, which is why we tell Core Data which managed object context the new managed object should be linked to.
This isn't too complex, is it? We've now created a new person object. How do we change its attributes or define a relationship? This is done by leveraging key-value coding. To change the first name of the new person object we just created we do the following.
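Assuming the attribute for the person's first name is named first, as suggested later in this article, the code looks like this. The value is a placeholder:

// Set the first attribute of the new person record.
[newPerson setValue:@"Bart" forKey:@"first"];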
If you're familiar with key-value coding, then this should look very familiar. Because the NSManagedObject class supports key-value coding, we change an attribute by invoking setValue:forKey:. It's that simple.
One downside of this approach is the ease with which you can introduce bugs by misspelling an attribute or relationship name. Also, attribute names are not autocompleted by Xcode like, for example, property names are. We can remedy this problem, but that's something we'll take a look at a bit later in this series.
Before we continue our exploration of NSManagedObject, let's set the age of newPerson to 44.
[newPerson setValue:@44 forKey:@"age"];
If you're unfamiliar with key-value coding, then you might be surprised that we passed in an NSNumber literal instead of an integer, like we defined in our data model. The setValue:forKey: method only accepts objects, no primitives. Keep this in mind.
3. Saving a Record
Even though we now have a new person instance, Core Data hasn't saved the person to its backing store yet. The managed object we created currently lives in the managed object context in which it was inserted. To save the person object to the backing store, we need to save the changes of the managed object context by calling save: on it.
The save: method returns a boolean to indicate the result of the save operation and accepts a pointer to an NSError object, telling us what went wrong if the save operation is unsuccessful. Take a look at the following code block for clarification.
NSError *error = nil;

if (![newPerson.managedObjectContext save:&error]) {
    NSLog(@"Unable to save managed object context.");
    NSLog(@"%@, %@", error, error.localizedDescription);
}
Build and run the application to see if everything works as expected. Did you also run into a crash? What did the console output tell you? Did it look similar to the output below?
Xcode tells us that it expected an NSDate instance for the first attribute, but we passed in an NSString. If you open the Core Data model we created in the previous article, you'll see that the type of the first attribute is indeed Date. Change it to String and run the application one more time.
Another crash? Even though this is a more advanced topic, it's important to understand what's going on.
Data Model Compatibility
The output in Xcode's console should look similar to the output shown below. Note that the error is different from the previous one. Xcode tells us that the model to open the store is incompatible with the one used to create the store. How did this happen?
Unresolved error Error Domain=NSCocoaErrorDomain Code=134100 "The operation couldn’t be completed. (Cocoa error 134100.)" UserInfo=0xcb17a30 {metadata={
NSPersistenceFrameworkVersion = 508;
NSStoreModelVersionHashes = {
Address = <268460b1 0507da45 f37f8fb5 b17628a9 a56beb9c 8666f029 4276074d 11160d13>;
Person = <68eb2a17 12dfaf41 510772c0 66d91b3d 7cdef207 4948ac15 f9ae22cc fe3d32f2>;
};
NSStoreModelVersionHashesVersion = 3;
NSStoreModelVersionIdentifiers = (
""
);
NSStoreType = SQLite;
NSStoreUUID = "EBB4C708-F933-4E74-8EE0-47F9972EE523";
"_NSAutoVacuumLevel" = 2;
}, reason=The model used to open the store is incompatible with the one used to create the store}, {
metadata = {
NSPersistenceFrameworkVersion = 508;
NSStoreModelVersionHashes = {
Address = <268460b1 0507da45 f37f8fb5 b17628a9 a56beb9c 8666f029 4276074d 11160d13>;
Person = <68eb2a17 12dfaf41 510772c0 66d91b3d 7cdef207 4948ac15 f9ae22cc fe3d32f2>;
};
NSStoreModelVersionHashesVersion = 3;
NSStoreModelVersionIdentifiers = (
""
);
NSStoreType = SQLite;
NSStoreUUID = "EBB4C708-F933-4E74-8EE0-47F9972EE523";
"_NSAutoVacuumLevel" = 2;
};
reason = "The model used to open the store is incompatible with the one used to create the store";
}
When we first launched the application a few moments ago, Core Data inspected the data model and, based on that model, created a store for us, a SQLite database in this case. Core Data is clever though. It makes sure that the structure of the backing store and that of the data model are compatible. This is vital to make sure that we get back from the backing store what we expect and what we put there in the first place.
During the first crash, we noticed that our data model contained a mistake and we changed the type of the first attribute from Date to String. In other words, we changed the data model even though Core Data had already created the backing store for us based on the incorrect data model.
After updating the data model, we launched the application again and ran into the second crash. One of the things Core Data does when it creates the Core Data stack is make sure the data model and the backing store—if one exists—are compatible. That was not the case in our example, hence the crash.
How do we solve this? The easy solution is to remove the application from the device or from the iOS Simulator, and launch the application again. However, this is something you cannot do if you already have an application in the App Store that people are using. In that case, you make use of migrations, which is something we'll discuss in a future article.
Because we don't have millions of users using our application, we can safely remove the application from our test device and run it once more. If all went well, the new person is now safely stored in the store, the SQLite database Core Data created for us.
Inspecting the Backing Store
You can verify that the save operation worked by taking a look inside the SQLite database. If you ran the application in the iOS Simulator, navigate to ~/Library/Application Support/iPhone Simulator/<VERSION>/Applications/<ID>/Documents/Core_Data.sqlite. To make your life easier, I recommend you install SimPholders, a tool that makes navigating to the above path much, much easier. Open the SQLite database and inspect the table named ZPERSON. The table should have one entry, the one we inserted a minute ago.
You should keep two things in mind. First, there's no need to understand the database structure. Core Data manages the backing store for us and we don't need to understand its structure to work with Core Data. Second, never access the store directly. Core Data is in charge of the backing store and we need to respect that if we want Core Data to do its job well. If we start interacting with the SQLite database—or any other store—there is no guarantee Core Data will continue to function properly. In short, Core Data is in charge of the store so leave it alone.
4. Fetching Records
We'll take a close look at NSFetchRequest in the next article, but we already need it here, because the NSFetchRequest class is how we ask Core Data for information from the object graph it manages. Let's see how we can fetch the record we inserted earlier using NSFetchRequest.
After initializing the fetch request, we create an NSEntityDescription object and assign it to the entity property of the fetch request. As you can see, we use the NSEntityDescription class to tell Core Data what entity we're interested in.
Fetching data is handled by the NSManagedObjectContext class. We invoke executeFetchRequest:error:, passing in the fetch request and a pointer to an NSError object. The method returns an array of results if the fetch request is successful and nil if a problem is encountered. Note that Core Data always returns an NSArray object if the fetch request is successful, even if we expect a single result or if Core Data didn't find any matching records.
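Putting these pieces together, the fetch can be sketched as follows. This is a minimal example; the self.managedObjectContext property is an assumption based on the stack we set up earlier in this series.

```objc
// Initialize a fetch request for the Person entity.
NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];

// Use NSEntityDescription to tell Core Data which entity we're interested in.
NSEntityDescription *entityDescription = [NSEntityDescription entityForName:@"Person"
                                                     inManagedObjectContext:self.managedObjectContext];
[fetchRequest setEntity:entityDescription];

// Execute the fetch request through the managed object context.
NSError *fetchError = nil;
NSArray *result = [self.managedObjectContext executeFetchRequest:fetchRequest error:&fetchError];

if (!result) {
    NSLog(@"Unable to execute fetch request.");
    NSLog(@"%@, %@", fetchError, fetchError.localizedDescription);
}
```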
Run the application and inspect the output in Xcode's console. Below you can see what was returned, an array with one object of type NSManagedObject. The entity of the object is Person.
To access the attributes of the record, we make use of key-value coding like we did earlier. It's important to become familiar with key-value coding if you plan to work with Core Data.
You may be wondering why I log the person object before and after logging the person's name. This is actually one of the most important lessons of this article. Take a look at the output below.
The first time we log the person object to the console, we see data: <fault>. The second time, however, data contains the contents of the object's attributes and relationships. Why is that? This has everything to do with faulting, a key concept of Core Data.
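The log statements described above can be sketched as shown below. The attribute key, name, is an assumption for illustration; use whatever attribute names your data model defines.

```objc
// Grab the first managed object from the fetch results.
NSManagedObject *person = (NSManagedObject *)[result objectAtIndex:0];

// First log: the object is a fault, so the console shows data: <fault>.
NSLog(@"%@", person);

// Accessing an attribute fires the fault...
NSLog(@"Name: %@", [person valueForKey:@"name"]);

// ...so the second log shows the object's attributes and relationships.
NSLog(@"%@", person);
```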
5. Faulting
The concept that underlies faulting isn't unique to Core Data. If you've ever worked with Active Record in Ruby on Rails, then the following will certainly ring a bell. The concepts aren't identical, but they are similar from a developer's perspective.
Core Data tries to keep its memory footprint as low as possible and one of the strategies it uses to accomplish this is faulting. When we fetched the records for the Person entity a moment ago, Core Data executed the fetch request, but it didn't fully initialize the managed objects representing the fetched records.
What we got back is a fault, a placeholder object representing the record. The object is of type NSManagedObject and we can treat it as such. By not fully initializing the record, Core Data keeps its memory footprint low. It's not a significant memory saving in our example, but imagine what would happen if we fetched dozens, hundreds, or even thousands of records.
Faults are generally nothing that you need to worry about. The moment you access an attribute or relationship of a managed object, the fault is fired, which means that Core Data changes the fault into a realized managed object. You can see this in our example and that's also the reason why the second log statement of the person object doesn't print a fault to the console.
Faulting is something that trips up many newcomers and I therefore want to make sure you understand the basics of this concept. We'll learn more about faulting later in this series.
6. Updating Records
Updating records is just as simple as creating a new record. You fetch the record, change an attribute or relationship, and save the managed object context. The idea is the same as when you create a record. Because the managed object, the record, is linked to a managed object context, the latter is aware of any changes, insertions and updates. When the managed object context is saved, everything is propagated to the backing store by Core Data.
Take a look at the following code block in which we update the record we fetched by changing the person's age and saving the changes.
NSManagedObject *person = (NSManagedObject *)[result objectAtIndex:0];
[person setValue:@30 forKey:@"age"];

NSError *saveError = nil;
if (![person.managedObjectContext save:&saveError]) {
    NSLog(@"Unable to save managed object context.");
    NSLog(@"%@, %@", saveError, saveError.localizedDescription);
}
You can verify that the update was successful by taking another look at the SQLite store as we did earlier.
7. Deleting Records
Deleting a record follows the same pattern as creating and updating records. We tell the managed object context that a record needs to be deleted from the persistent store by invoking deleteObject: and passing the managed object that needs to be deleted.
In our project, delete the person object we fetched earlier by passing it to the managed object context's deleteObject: method. Note that the delete operation isn't committed to the backing store until we call save: on the managed object context.
NSManagedObject *person = (NSManagedObject *)[result objectAtIndex:0];
[self.managedObjectContext deleteObject:person];

NSError *deleteError = nil;
if (![person.managedObjectContext save:&deleteError]) {
    NSLog(@"Unable to save managed object context.");
    NSLog(@"%@, %@", deleteError, deleteError.localizedDescription);
}
Conclusion
In this tutorial, we've covered a lot more than just creating, fetching, updating, and deleting records. We've touched on a few important concepts on which Core Data relies, such as faulting and data model compatibility.
In the next installment of this series, you'll learn how to create and update relationships, and we'll take an in-depth look at the NSFetchRequest class. We'll also start using NSPredicate and NSSortDescriptor to make our fetch requests flexible, dynamic, and powerful.
When you’re planning to take the plunge and develop your first Android application, it’s easy to get intimidated by the jargon-packed list of tools you’ll need to assemble. However, in reality downloading and preparing the Android development environment is a straightforward process, thanks to the handy, all-in-one bundles that give you instant access to most, if not all, of the tools you need.
The drawback of downloading everything in one bundle is that it’s easy to lose track of the tools included in your Android development environment, and you might not have a clear idea what each tool is for.
This article will demystify the major tools you’ll use to develop your first Android application. For those who want to enhance their Android projects with additional functionality, this article also provides a brief introduction to Google Play Services, which you can use to add Google+ and Google Maps content to your app, and also provides a way to monetize your Android apps.
However, before you can assemble your Android developer toolkit, you first need to make a decision: which integrated development environment (IDE) are you going to develop your apps in?
1. Eclipse or Android Studio?
Up until recently, Eclipse with the ADT (Android Development Tools) plugin was the recommended environment for developing Android apps. However, at Google I/O 2013, Google shook things up by announcing their own IDE, Android Studio, designed specifically for Android development.
The release of Android Studio has made life more complicated for Android developers, who now have to weigh up the pros and cons of both IDEs and decide which is right for them.
The key to deciding whether Android Studio or Eclipse should be your development environment is identifying what you’re looking for in an IDE.
Streamlined or Feature-Packed User Interface
Eclipse provides a common development environment that can be extended through plugins that allow you to develop a range of apps in different programming languages, all within the same IDE.
For Android development, Eclipse is extended through the Android Development Tools or ADT plugin. Although ADT was designed specifically for Android development, Eclipse was not, which means it includes a lot of features that have nothing to do with developing Android apps.
If you're an experienced Eclipse user, then chances are you’re already familiar with Eclipse's busy, feature-rich user interface, but if you’re new to Eclipse, then you’ll need to spend some time identifying what's relevant to you as an Android developer and what's just cluttering up the user interface.
This is where Android Studio has an advantage compared to Eclipse. Android Studio has a bare-bones user interface and a modest set of features, but everything it contains is geared towards helping you develop Android apps.
Established Community or Going It Alone
Eclipse is an established IDE with a thriving community, which means there's no shortage of places to turn to when you need help, such as blogs, tutorials, Google groups, video guides, forums, or the extensive Eclipse and ADT documentation.
This is in stark contrast to Android Studio. As a new project, Android Studio simply hasn’t had the time to build up the same wealth of resources. Although some of the Eclipse-based resources may also be applicable to Android Studio tasks, if you have specific questions about the Android Studio environment, then Eclipse-focused information is going to be of little use.
Stability or New Technology
As an established IDE, Eclipse is a stable and reliable piece of software, whereas Android Studio is currently only available as an early access preview and comes with a disclaimer that you should expect to encounter bugs and missing features.
Android Studio has the innovative features you'd expect from a brand new IDE, but this is offset by its early access status. Depending on your situation, the lack of an official Android Studio release may be a deal breaker. If you’re planning to work on a small, personal project, then bugs and missing features may not be too much of a concern. However, if you’re looking for an IDE in which to develop a commercial Android application that's crucial to your latest business venture, then the early access preview of Android Studio may not be the most sensible option.
Are You Familiar With Gradle?
Android Studio comes with a Gradle plugin and, if you choose this IDE, you'll ultimately use Gradle to automate the building, testing, publishing, and deployment of your Android apps.
If you’re not familiar with Gradle or don’t have the time or inclination to learn Gradle, then you may prefer to go down the Eclipse route, because Gradle is so tightly integrated into Android Studio that you’ll struggle to use any other build tool.
Conclusion
There's no easy answer to the "Android Studio or Eclipse" question as your decision will ultimately depend on individual factors, such as the software you're already familiar with, how much time and inclination you have to learn new technologies, and the nature of the Android apps you want to develop.
For example, if you have lots of time to dedicate to learning new technologies and like the sound of Gradle, then you’re more likely to opt for Android Studio. However, if you're an experienced Eclipse user with a busy schedule who isn't particularly excited by the prospect of getting to know a new IDE, then developing in Eclipse is probably the most sensible option.
Regardless of whether you opt for Android Studio or Eclipse with the ADT plugin, when you grab your all-in-one bundle, you'll get access to the same set of tools, known as the Android SDK.
2. What is the Android SDK?
The Android SDK consists of various tools that are essential for creating Android apps, from libraries to source code, sample projects, and much more. An exhaustive look at everything the Android SDK has to offer is beyond the scope of this article, but there are a few tools essential to developing Android apps that you should familiarize yourself with as soon as possible.
Emulator
The Android SDK includes a mobile device emulator that lets you test your Android apps across a wide range of devices without actually having to purchase said devices.
As its name implies, the emulator has the power to emulate different Android devices by running various Android Virtual Device (AVD) configurations. During the lifecycle of a typical Android project, you'll create a range of AVD configurations for the emulator, with each AVD mimicking a different Android device.
AVD Manager
The AVD Manager is where you create, edit, repair, delete, and launch your AVD configurations. The AVD Manager also contains a list of known device definitions, which is handy when you want to emulate a particular device, but are unsure of its hardware and software specifications.
SDK Manager
The Android SDK separates its various tools, platforms, APIs, and other components into different packages that you update and download via the SDK Manager. Some of these packages are recommended, or even installed automatically when you download the Android SDK, but many of these packages are optional and will only be of interest to you if you're developing a certain kind of Android application.
DDMS
Dalvik Debug Monitor Server or DDMS is a debugging tool that can perform various debugging tasks, such as tracking which objects are being allocated to memory, which threads are currently running, and how much heap memory a particular process is using.
DDMS also includes a Detailed Network Usage tab that tracks network requests and analyzes how your application is transferring data. Although DDMS fulfills the same purpose in both Android Studio and Eclipse, the way you access it is different. Eclipse users can access DDMS by going to Window > Open Perspective > DDMS, whereas Android Studio users access DDMS by clicking the Monitor button in the toolbar (the button with the plain Android icon).
Lint
Lint is a code scanning tool that helps improve the structural quality of your code by checking an Android project's source files for bugs and areas that could potentially be optimized.
Although Lint is included in the Android SDK, the way you launch Lint differs depending on whether you're using Eclipse or Android Studio. In Android Studio, you run Lint by selecting Analyze > Inspect Code. In Eclipse, Lint runs automatically whenever you make changes to your project via the layout editor or XML files, and it also runs whenever you export a project.
Android Debug Bridge
Also known as adb, Android Debug Bridge lets you perform a range of debugging tasks by typing instructions directly into the command line. You'll find a comprehensive table of adb commands at the official Android documentation.
Regardless of whether you opt for Android Studio or Eclipse, you have access to all of the aforementioned Android SDK tools. However, the rest of your toolkit varies, depending on which IDE you choose.
3. Eclipse Toolkit
Despite the competition from Android Studio, Eclipse with the ADT plugin has lots to offer to Android developers. If you opt for Eclipse as your IDE, you can download an all-in-one bundle that includes the Android SDK and the following additions.
Eclipse
This IDE provides a common development environment that you can customize with different plugins.
ADT Plugin
The Android Development Tools plugin extends the Eclipse environment with Android-specific features, including a project creation wizard that automatically generates the basic file structure of your Android application and custom XML editors that help you write valid code for your resource files and Android manifest.
The ADT plugin also provides a graphical user interface to many SDK tools that you could otherwise only access from the command line, such as the Android Debug Bridge and DDMS, which we discussed earlier.
Setting up Eclipse with the ADT plugin as your development environment is a straightforward process:
Download the ADT bundle from the Android Developer website. Once the download is complete, unzip the archive and open it. It'll be named adt-bundle followed by the version number.
Launch Eclipse by opening the Eclipse folder and double-clicking the Eclipse application icon.
Eclipse stores all the projects you create in a so-called workspace. On Windows, this is by default created in C:\Users\Name\Documents\workspace. Change this path if you like and click OK.
Eclipse launches with the ADT plugin and Android SDK already integrated. This means that as soon as Eclipse is launched, you're ready to start creating your first Android application.
4. Android Studio Toolkit
Even though it's only available as an early access preview, Android Studio has some interesting features, not to mention Google's seal of approval. If you decide to download the Android Studio bundle, you'll get the Android SDK and the following additions.
Android Studio
In the world of integrated development environments, Android Studio is uniquely positioned as the IDE that's designed specifically for developing Android apps.
Gradle
Android Studio comes with a built-in Android plugin for Gradle and uses Gradle as its build system. In Android Studio, you use Gradle to perform tasks such as customizing, configuring, and extending your project's build process and managing dependencies from your local file system and from remote repositories.
Gradle can also help you support as many devices as possible by generating multiple APKs with different configurations from a single Android project.
If you decide to use the early access preview of Android Studio, download the latest version from the Android Developer website. On Windows, launch the executable to open Android Studio and start developing Android apps.
5. Google Play Services
This article has already introduced you to the Android SDK tools and shown you how to install and set up your IDE of choice. However, if you want to create a richer experience for your users, then you may want to add Google Play Services to your development environment.
Google Play Services are optional extras that enable you to add more functionality and features to your Android apps. Google Play Services have lots to offer to the Android developer. Let's take a look at a few of them.
Google+
Enrich your app with Google+ content. The Google+ Platform service can help you provide a personalized experience for your users by pulling content from their Google+ account into your app. For example, your app could use Google+ information to greet the user by name or use their Google+ profile picture as their avatar.
Alternatively, your app can push information to Google+, for example, letting users post their top scores and other in-app achievements to their Google+ profile or send invites to their Google+ contacts.
Google Maps
Embed Google Maps content in your app, including 3D maps, hybrid maps, and even Google Street View content. Note that the Google Maps Android API does require an API key, which you can obtain through the Google APIs Console.
Google Play In-App Billing
This service allows you to monetize your Android projects by selling digital content through your app. This content can be downloadable, such as pictures or videos, or virtual content, for example, new levels in a game, unlockable features, or in-game goods, such as gems and extra lives.
Google Play handles these transactions for you, so you don’t need to worry about building your own checkout and billing functionality. Note that in-app billing does require you to create a Google Play Developer Console account and a Google Wallet merchant account. You'll also need to install the Google Play Billing library. You can do this by launching the SDK Manager, opening the Extras section, selecting Google Play Billing library, and clicking Install packages.
This is just a selection of what Google Play Services has to offer. You can get more information about Google Play Services at the official Android documentation.
Before you can take advantage of Google Play Services, you need to download an additional package. You can do this following these steps:
In your IDE of choice, open the Android SDK Manager.
Expand the Extras section.
Select Google Play Services. Note that if you're using Android Studio you'll also need to install Google Repository, which is located in the Extras category.
After clicking Install packages, the SDK Manager will go ahead and install Google Play Services.
Conclusion
Regardless of whether you choose Eclipse or Android Studio as your IDE, you should now have a better understanding of the ecosystem of tools used in Android development, and how these tools fit together in the wider context of your IDE.
If you've been following along with this tutorial, your development environment should now be installed and ready to go. The only thing left to do is create a new Android project and start developing.
In this tutorial, you'll learn how to create a mobile 3D game using C# and Unity. The objective of the game is to score as many points as possible. You'll learn the following aspects of Unity game development:
Setting up a 3D project in Unity
Implementing tap controls
Integrating physics
Creating Prefabs
1. Create a New Unity Project
Open Unity and select New Project from the File menu to open the new project dialog. Tell Unity where you want to save the project and set the Set up defaults for: menu to 3D.
2. Build Settings
In the next step, you're presented with Unity's user interface. Set the project up for mobile development by choosing Build Settings from the File menu and selecting your preferred platform. I've chosen Android for this tutorial.
3. Devices
The first thing we need to do after selecting the platform we're targeting is choosing the size of artwork that we'll use in the game. I've listed the most important devices for each platform below and included the device's screen resolution and pixel density.
iOS
iPad: 1024px x 768px
iPad Retina: 2048px x 1536px
3.5" iPhone/iPod Touch: 320px x 480px
3.5" iPhone/iPod Retina: 960px x 640px
4" iPhone/iPod Touch: 1136px x 640px
Android
Because Android is an open platform, there are many different devices, screen resolutions, and pixel densities. A few of the more common ones are listed below.
Asus Nexus 7 Tablet: 800px x 1280px, 216ppi
Motorola Droid X: 854px x 480px, 228ppi
Samsung Galaxy S3: 720px x 1280px, 306ppi
Windows Phone
Nokia Lumia 520: 400px x 800px, 233ppi
Nokia Lumia 1520: 1080px x 1920px, 367ppi
BlackBerry
Blackberry Z10: 720px x 1280px, 355ppi
Remember that the code used for this tutorial can be used to target any of the above platforms.
4. Export Graphics
Depending on the devices you're targeting, you may need to convert the artwork for the game to the recommended size and resolution. You can do this in your favorite image editor. I've used the Adjust Size... function under the Tools menu in OS X's Preview application.
5. Unity User Interface
Before we get started, make sure to click the 3D button in the Scene panel. You can also modify the resolution that's being displayed in the Game panel.
6. Game Interface
The interface of our game will be straightforward. The above screenshot gives you an idea of the artwork we'll be using and how the game's interface will end up looking. You can find the artwork and additional resources in the source files of this tutorial.
7. Programming Language
You can use one of three programming languages with Unity: C#, UnityScript (a variation of JavaScript), and Boo. Each programming language has its pros and cons, and it's up to you to decide which one you prefer. My personal preference goes to the C# programming language, so that's the language I'll be using in this tutorial.
If you decide to use another programming language, then make sure to take a look at Unity's Script Reference for examples.
8. Sound Effects
I'll use a number of sounds to improve the audio experience of the game. The sound effects used in this tutorial were obtained from as3sfxr and Soungle.
9. 3D Models
To create our game, we first need to get our 3D models. I recommend 3Docean for high quality models, textures, and more, but if you're testing or still learning then free models may be a good place to start.
The models in this tutorial were downloaded from SketchUp 3D Warehouse where you can find a good variety of models of all kinds.
Because Unity doesn't recognize the SketchUp file format, we need to convert SketchUp files to a file format Unity can import. Start by downloading the free version of SketchUp, SketchUp Make.
Open your 3D model in SketchUp Make, select Export > 3D Model from the File menu, and choose Collada (*.dae) from the list of options.
Choose a name, select a directory, and click Export. A file and a folder for the 3D model will be created. The file contains the 3D object data and the folder contains the textures used by the model. You can now import the model into Unity as explained in the next step.
10. Import Assets
Before we start coding, we need to add our assets to the Unity project. You can do this in one of several ways:
Select Import New Asset from the Assets menu.
Add the items to the assets folder of your project.
Drag and drop the assets in the project window.
After completing this step, you should see the assets in your project's Assets folder in the Project panel.
11. Create Scene
We're ready to create the scene of our game by dragging objects to the Hierarchy or Scene panel.
12. 2D Background
Start by dragging and dropping the background into the Hierarchy panel. It should automatically appear in the Scene panel. Adjust the Transform values in the Inspector as shown in the next screenshot.
13. Hoop
The objective of the game is to throw the ball through the hoop. Drag it from the Assets panel to the Scene and change its Transform properties as shown in the screenshot below.
14. Light
As you may have noticed, the basketball hoop is a bit too dark. To fix this, we need to add a Light to our scene. Go to GameObject > Create Other and select Directional Light. This will create an object that will produce a beam of light. Change its Transform values as shown in the next screenshot so that it illuminates the basketball hoop.
15. Hoop Collider
With the basketball hoop properly lit, it's time to add a collider so the ball doesn't go through when it hits the white area.
Click the Add Component button in the Inspector panel, select Physics > Box Collider, and change its values as shown in the next screenshot.
You'll see a green border around the basketball hoop in the Scene panel representing the box collider we just added.
16. Bounce Physics Material
If we were to throw a ball at the basketball hoop, it would be stopped by the box collider, but it wouldn't bounce like you'd expect it to in the real world. To remedy this, we need a Physics Material.
After selecting Create > Physics Material from the Assets menu, you should see it appear in the Assets panel. I changed its name to BounceMaterial.
Change its properties in the Inspector panel to match the ones in this below screenshot.
Next, select the box collider of the basketball hoop and click the little dot to the right of the Material field. A window will appear in which you can select the physics material.
17. Basket Collider
We'll use another collider to detect when the ball passes through the hoop. This should be a trigger collider to make sure it detects the collision without interacting with the physics body.
Create a new collider for the hoop as shown in step 15 and update its values as shown in the next screenshot.
This will place the collider below the ring where the ball can't go back upwards, meaning that a basket has been made. Be sure to check the Is Trigger checkbox to mark it as a trigger collider.
18. Ring Mesh Collider
Time to add a collider to the ring itself. Because we need the ball to pass through the center of the ring, we can't use a sphere or box collider. Instead, we'll use a Mesh Collider.
A Mesh Collider allows us to use the shape of the 3D object as a collider. As the documentation states, the Mesh Collider builds its collision representation from the mesh attached to the GameObject.
Select the hoop from the Hierarchy panel, click on the triangle on its left to expand its hierarchy, expand group_17, and select the element named Ring.
Add a collider as we saw in step 15, but make sure to select Mesh Collider. Unity will then automatically detect the shape of the model and create a collider for it.
19. Hoop Sound
To play a sound when the ball hits the hoop, we first need to attach an audio source to the hoop. Select it from the Hierarchy or Scene view, click the Add Component button in the Inspector panel, and select Audio Source from the Audio section.
Uncheck Play on Awake and click the little dot on the right, below the gear icon, to select the sound you want to play.
20. Ball
Let's now focus on the basketball. Drag it from the Assets folder and place it in the scene. Don't worry about the ball's position for now, because we'll convert it to a Prefab later.
To make the ball detect when it hits the hoop, we need to add a component, a Sphere Collider to be precise. Select the ball in the scene, open the Inspector panel, and click Add Component. From the list of components, select Sphere Collider from the Physics section and update its properties as shown below.
21. Ball RigidBody
To detect a collision with the basketball, at least one of the colliding objects needs to have a RigidBody component attached to it. To add one to the ball, select Add Component in the Inspector panel, and choose Physics > RigidBody.
Leave the settings at their defaults and drag the ball from the Hierarchy panel to the Assets panel to convert it to a Prefab.
22. Hoop Sprite
To represent the baskets already made by the player, we use a 2D version of the basketball hoop. Drag it from the Assets panel and place it on the scene as shown below.
23. Score Text
Below the 2D hoop, we display the number of baskets the player has scored so far. Select GameObject > Create Other > GUI Text to create a text object, place it at the bottom of the basketball hoop, and change the text in the Hierarchy panel to 0.
You can embed a custom font by importing it in the Assets folder and changing the Font property of the text in the Inspector.
24. Force Meter
The force meter is a bar that will show the force used to shoot the ball. This will add another level of difficulty to the game. Drag the sprites for the force meter from the Assets panel to the Scene and position them as shown in the screenshot below.
25. Ball Sprite
We also add an indicator to the interface showing how many shots the player has left. To complete this step, follow the same steps we used to display the player's current score.
26. Basket Script
It's finally time to write some code. The first script that we'll create is the Basket script that checks if the ball passes through the ring or hits the board.
Select the hoop and click the Add Component button in the Inspector panel. Select New Script and name it Basket. Don't forget to change the language to C#. Open the newly created file and add the following code snippet.
using UnityEngine;
using System.Collections;

public class Basket : MonoBehaviour
{
    public GameObject score; //reference to the ScoreText game object, set in the editor
    public AudioClip basket; //reference to the basket sound

    void OnCollisionEnter() //the ball hits the board
    {
        audio.Play(); //play the board hit sound
    }

    void OnTriggerEnter() //the ball enters the basket collider
    {
        int currentScore = int.Parse(score.GetComponent<GUIText>().text) + 1; //add 1 to the score
        score.GetComponent<GUIText>().text = currentScore.ToString();
        AudioSource.PlayClipAtPoint(basket, transform.position); //play the basket sound
    }
}
In this script, we set two public variables that represent objects on the Scene and in the Assets folder. Go back to the editor and click the little dot on the right of the variables to select the values described in the comments.
We play a sound when the ball hits the basketball hoop and check if the ball passes through the ring. The Parse method converts the text from the GUI Text game object to a number so we can increment the score, and we then set it again as text using ToString. Finally, we play the basket sound.
27. Shoot Script
The Shoot class handles the rest of the game interaction. We'll break the script's contents down to make it easier to digest.
Start by selecting the Camera and click the Add Component button in the Inspector panel. Select New Script and name it Shoot.
28. Variables
In the next code snippet, I've listed the variables that we'll use. Read the comments in the code snippet for clarification.
using UnityEngine;
using System.Collections;

public class Shoot : MonoBehaviour
{
    public GameObject ball; //reference to the ball prefab, set in the editor
    private Vector3 throwSpeed = new Vector3(0, 26, 40); //this value is a sure basket; we'll modify it using the force meter
    public Vector3 ballPos; //starting ball position
    private bool thrown = false; //true if a ball has been thrown, prevents 2 or more balls
    private GameObject ballClone; //we don't use the original prefab
    public GameObject availableShotsGO; //ScoreText game object reference
    private int availableShots = 5;
    public GameObject meter; //references to the force meter
    public GameObject arrow;
    private float arrowSpeed = 0.3f; //difficulty, a higher value means faster arrow movement
    private bool right = true; //used to reverse the arrow movement
    public GameObject gameOver; //game over text
29. Increase Gravity
Next, we create the Start method in which we set the gravity force to -20 to make the ball drop faster.
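The snippet for this step isn't shown above. A minimal Start method matching the description would look something like this, using Physics.gravity, Unity's global gravity vector:

```csharp
void Start()
{
    // Pull the ball down faster than Unity's default gravity of (0, -9.81, 0).
    Physics.gravity = new Vector3(0, -20, 0);
}
```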
To handle interactions with the physics engine, we implement the FixedUpdate method. The difference between this method and the regular Update method is that FixedUpdate runs in step with the physics engine at fixed intervals, whereas Update runs once per frame, so its timing can vary if the device slows down, for example, due to a shortage of memory.
In the FixedUpdate method, we move the arrow of the force meter using the right variable to detect when to reverse the arrow's movement.
void FixedUpdate()
{
    /* Move Meter Arrow */
    if (arrow.transform.position.x < 4.7f && right)
    {
        arrow.transform.position += new Vector3(arrowSpeed, 0, 0);
    }
    if (arrow.transform.position.x >= 4.7f)
    {
        right = false;
    }
    if (right == false)
    {
        arrow.transform.position -= new Vector3(arrowSpeed, 0, 0);
    }
    if (arrow.transform.position.x <= -4.7f)
    {
        right = true;
    }
31. Shoot Ball
The basketball is thrown when the player taps the screen. Whenever the screen is tapped, we first check if there's already a ball in the air and if the player has shots available. If these requirements are met, we update the values, create a new instance of the ball, and throw it using the addForce method.
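The original snippet for this step isn't reproduced above. A sketch of the throw logic, using the variables declared earlier, could look like the following; the exact input check and force calculation are assumptions, not the author's original code:

```csharp
    /* Shoot ball on tap */
    if (Input.GetMouseButtonDown(0) && !thrown && availableShots > 0)
    {
        thrown = true;
        availableShots--;
        availableShotsGO.GetComponent<GUIText>().text = availableShots.ToString();

        // Instantiate a fresh copy of the ball prefab and throw it.
        ballClone = (GameObject)Instantiate(ball, ballPos, transform.rotation);
        ballClone.rigidbody.AddForce(throwSpeed, ForceMode.Impulse);
    }
```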
In the following code block, we test whether the ball has reached the floor and remove it when it has. We also prepare for the next throw by resetting the variables.
    /* Remove ball when it hits the floor */
    if (ballClone != null && ballClone.transform.position.y < -16)
    {
        Destroy(ballClone);
        thrown = false;
        throwSpeed = new Vector3(0, 26, 40); //reset the perfect shot variable
33. Check Available Shots
After removing the ball, we verify that the player has shots left. If this isn't the case, then we end the game and call restart.
        /* Check if out of shots */
        if (availableShots == 0)
        {
            arrow.renderer.enabled = false;
            Instantiate(gameOver, new Vector3(0.31f, 0.2f, 0), transform.rotation);
            Invoke("restart", 2);
        }
    }
}
34. restart
The restart method runs two seconds after the player runs out of shots and restarts the game by invoking LoadLevel.
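The method body isn't shown above; given the description, it boils down to reloading the current level. Application.LoadLevel was the scene-loading API in this version of Unity:

```csharp
void restart()
{
    // Reload the current scene to start a new game.
    Application.LoadLevel(Application.loadedLevel);
}
```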
It's time to test the game. Press Command-P to play the game in Unity. If everything works as expected, then you're ready for the final steps.
36. Player Settings
When you're happy with your game, it's time to select Build Settings from the File menu and click the Player Settings button. This should bring up the Player Settings in the Inspector panel where you can set the parameters for your application.
The settings are application-specific and include the creator or company, the application's resolution and display mode, rendering mode, device compatibility, and so on. The settings differ depending on the platform and devices your application targets; also keep in mind the requirements of the store you're publishing on.
37. Icons and Splash Images
Using the artwork you created earlier, you can now create a nice icon and a splash image for your game. Unity shows you the required sizes, which depend on the platform you're building for.
38. Build and Play
Once your project is properly configured, it's time to revisit the Build Settings and click the Build button. That's all it takes to build your game for testing and/or distribution.
Conclusion
In this tutorial, we've learned about 3D models, mesh colliders, physics materials, collision detection, and other aspects of Unity game development. I encourage you to experiment with the result and customize the game to make it your own. I hope you liked this tutorial and found it helpful.
If you followed along with the first part of this tutorial, then you probably already know that the installation process of Xamarin is not overly complicated. Xamarin has created a very nice installer that does most of the requirements verification for you and you just need to check a few boxes and click Next. While the process of getting Xamarin.iOS up and running is quite similar, there will be a few differences depending on which platform you are developing on, OS X or Windows.
Checklist
You will quickly find that the process of creating iOS applications in C# is slightly more involved than the Android process. It's not because the SDK is any more difficult to understand, but there are a few more moving parts. In order to complete this tutorial and be able to successfully create iOS applications in C#, you're going to need the following:
a Mac, regardless of whether or not you are using Visual Studio on a PC
Xcode
the latest iOS SDK
the latest Xamarin.iOS SDK
an Apple Developer Account
a PC, if you wish to develop using Visual Studio
If you're coming from the Windows side of the world, you may be a little upset by the fact that you need a Mac and Xcode. The reality is that, no matter what tools and languages you use to create your iOS application, only Xcode has the capabilities to create the final iOS distributable (.ipa) and Xcode only runs on a Mac. Believe me, the sooner you accept this, the sooner you'll be enjoying the rest of the process.
2. Installation
If you followed along with the first part of this tutorial, then you probably already have a fairly good grasp on the Xamarin installation process. The steps involved to install Xamarin.iOS are similar to those of Xamarin.Android. But if you haven't read the previous tutorial, then the following section will cover all the steps involved.
Step 1: Xcode and the iOS SDK
Regardless of what platform you intend to do your development on, you are going to need to have the latest version of both Xcode and the iOS SDK. The primary reason you will need Xcode is for the build process.
First, head over to the iOS Dev Center and create an account if you don't already have one. You can bypass this step and go straight to the Mac App Store to download Xcode, but if you plan on running your app on a physical device, then you're going to need a developer account anyway, so you might as well create one now.
Once created, log on and navigate to the Downloads page to get the latest version of Xcode. It will take you to the Mac App Store to complete the download and installation process. This will get you not only the latest version of Xcode, but it will also download and install the latest version of the iOS SDK. Sweet.
This tutorial will not go into detail on the provisioning process and deploying your application to a device. There are other articles on Tuts+ that cover that topic as well as documentation on the Xamarin website if you wish to do that.
Step 2: Xamarin.iOS and Xamarin Studio
You can kill two birds with one stone by heading over to the Xamarin Download page, creating an account if you don't already have one, and clicking Download Xamarin. This will download the Xamarin installer that will take care of all the prerequisite checking, downloading, and installing for you. Double-click the .dmg file to start the installation.
Once the installer begins, you can select the pieces that you'd like to install. You will only need the Xamarin.iOS option for this tutorial, but feel free to install as much or as little as you'd like.
This screen may look a little different on your machine depending on the OS you are running as well as what products you may or may not already have installed. As mentioned before, you will still need to complete this process on a Mac if you are doing your development on a PC. Part of the installation of Xamarin.iOS is the Xamarin Build Host that allows you to connect to your Mac over the network from a PC and create the .ipa package that runs on the iOS Simulator or a physical device.
Once all the packages have been downloaded and installed, if you are doing your development on a Mac, you can start up Xamarin Studio. If you are going to do your development on a PC, then you'll need to follow the same installation process to get all the necessary Xamarin.iOS bits as well as the Xamarin plugin for Visual Studio on your Windows machine.
To do this though, you will need to have at least the Business Edition of Xamarin. You can get everything you need through the 30-day free trial of Xamarin if you don't already have it. If you don't have access to the free trial or the full software, you will need to use Xamarin Studio on your Mac.
3. Building A Simple Application
The best way to truly learn a new technology of any sort is to dig in and create something from scratch. You can build this application in either IDE (Integrated Development Environment); all you need to do is follow along with the sample code.
In the modern age of iOS development, you have three options when it comes to creating your application's user interface.
create individual views and link them together in code
use Storyboards, which is a more graphical version of the first option
create the user interface in code
While the first and second options are the more popular options, we're going to create the sample application using the third option. It's important to not only understand how to do it, but also to understand why the graphical tools were created.
Step 1: Using Visual Studio on Windows
If you're using Visual Studio on a Windows machine to follow this tutorial, you'll run into a dialog asking you to connect to a Xamarin Build Host when you start creating the project. This is a fairly straightforward process in which you only need to follow the directions on the screen. It will look something like this.
The first dialog you'll see is an instructional window that describes how to start the Xamarin Build Host on your Mac using Spotlight.
On your Mac, open the Xamarin Build Host and click the Pair button. This will provide you with a PIN.
Switch back to Visual Studio and click the Continue button. If your Mac is configured correctly, it should show up in the list as a possible Xamarin Build Host.
Click your Xamarin Build Host system of choice and choose Connect.
Visual Studio will then ask for the PIN. Once you've entered the PIN and paired Visual Studio with your Xamarin Build Host, you will be able to follow along with the rest of this tutorial, not only writing an iOS application in C#, but also doing it using Visual Studio. Awesome.
If you ever need to connect this particular Xamarin Build Host to another system, you can click the Unpair button. After doing this, you will have to repeat the process for the new system.
Luckily, Visual Studio will remember the Xamarin Build Host you previously connected with. If you unpair Visual Studio from the Xamarin Build Host, the next time you try to write an iOS application in Visual Studio, it will ask for the PIN of the same build host. To search for another host in Visual Studio, click Options in the Tools menu and choose Xamarin > iOS Settings. There you'll find a button labeled Find Mac Build Host, which shows a dialog to select a different Xamarin Build Host.
Step 2: Creating the Project
Start by opening your IDE of choice and selecting File > New > Solution or Project depending on the IDE you're using. From the New Solution dialog box, choose C# > iOS > iPhone from the tree view and select Empty Project as the template. This will give you the basic structure for your application without all the bells and whistles getting in your way. This is what it will look like in Xamarin Studio 5.0.
You can give your solution any name you want, but if you're interested in following along with me, then name it Feeder. Once the solution/project structure is created, you'll see a number of files that are worth zooming in on:
AppDelegate.cs
Entitlements.plist
Info.plist
Main.cs
AppDelegate.cs
In the world of iOS, the AppDelegate is the conduit to your application from the device. It is used to handle any system events that are necessary. The application delegate also keeps a reference to the window object. Each iOS application has a window, an instance of the UIWindow class, that is used to draw the user interface of the application. The AppDelegate is responsible for subscribing to any system events pertaining to your application, for example, when the application finishes launching or when it is being terminated by the operating system.
Entitlements.plist
This file is similar to the permissions section of the AndroidManifest. It specifies the permissions that the application has as well as the technologies it is allowed to use. Some of the more common technologies include iCloud, PassKit, Push Notifications, etc. You can think of a plist or property list file as a dictionary of key-value pairs that store properties used by your application.
Info.plist
Similar to the Entitlements.plist file, the Info.plist file stores key-value pairs. The difference is that this file stores application information such as the application name, icons, launch images, and more.
Main.cs
This file contains the main entry point for your application. The Main method creates a new Xamarin.iOS application and specifies the AppDelegate that will handle the events sent by the operating system.
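For reference, the generated Main.cs looks along these lines; the template may differ slightly between Xamarin versions:

```csharp
using MonoTouch.UIKit;

namespace Feeder
{
    public class Application
    {
        static void Main (string[] args)
        {
            // UIApplication.Main starts the run loop and hands control
            // to the class registered as "AppDelegate".
            UIApplication.Main (args, null, "AppDelegate");
        }
    }
}
```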
Step 3: Creating the Model
The first step in creating the sample application is having an object that stores the information you want to display to the user. In this case, you want to store information about articles that appear on the Xamarin RSS blog feed. You can store more data than the example, but this will get you started. First, create a new class and name it RssItem. The definition of the class should look like this:
public class RssItem
{
public string Title { get; set; }
public string Creator { get; set; }
public DateTime PubDate { get; set; }
public string Link { get; set; }
}
The class is fairly straightforward:
Title, a string representing the title of the article
Creator, a string representing the author of the article
PubDate, a DateTime representing the article's publication date
Link, a string representing a direct link to the article
With this simple model set, we can now shift our focus to the application's user interface and core implementation.
Step 4: Model-View-Controller
When creating iOS applications, you have no choice but to follow the Model-View-Controller paradigm. Even if you don't understand what that is, by the end of the process you will be an MVC soldier plugging away at iOS applications without even thinking about it. At a high level, the MVC pattern is made up of, you guessed it, three parts:
Model
You can think of the Model in the MVC pattern as the main components (or classes) in your application that contain important business data/logic. In your case, the model is the RssItem class that you just created.
View
The View in your application is the actual visual representation of data (or your model) on the device. This may come in the form of a list of data or some custom components that represent the data found in your Model.
In this example, the view layer is going to consist of a list of RssItem objects that have been downloaded from the aforementioned feed. Ideally, the Model and the View are not aware of each other and shouldn't interact directly. The two pieces of the puzzle need to be held together with some sort of glue.
Controller
The glue that ties the Model and View together, is the Controller. In the world of iOS development you will typically see a controller in the form of a ViewController class or subclass. This class has the job of controlling the interaction between the Model and View. The interaction can come in the form of the user touching some piece of the View and updating the Model based on that interaction or some piece of the Model being updated by another process behind the scenes and updating the View based on that change.
To implement the MVC pattern in your application, you need to create a View and a Controller. Add a new item to your project by right-clicking on your project and selecting Add > New File (or Item depending on your IDE). In the New File Dialog, you'll need to select the iOS group and the iPhone View Controller as the type and give it a name of FeedItem.
This process is going to add three new files to your project. All of these files serve different purposes, but, together, they're going to build the list view that presents the articles of the Xamarin blog to the user.
FeedItemCell.cs
The FeedItemCell is a class that describes the individual cells (or rows) within your list view. This class will allow you to modify the look, layout, and functionality of all the cells in the list to give it a custom appearance.
FeedItemSource.cs
The source of data that is visualized in your list of FeedItemCell objects comes in the form of the FeedItemSource class. This source class not only contains the data that will be visualized in your list, but also contains information about the list including its groupings, headers, footers, and item counts. It also handles the interactions with the items when a user touches one of them.
FeedItemController.cs
Once again, the actual controller is the glue that binds everything together. The FeedItemController class is the container class that creates the list view the user will actually see on the screen. Within this class, you need to get the appropriate data to show on the screen, initialize a new FeedItemSource with that data, and pass the source to the controller.
Step 5: Getting Down to Code
Now that you have all the pieces of the puzzle ready, it's time to put them together. Let's start to work through the three files that you just created and get them ready for our data. First, take a look at the FeedItemCell class and modify it to look like this.
using System;
using MonoTouch.Foundation;
using MonoTouch.UIKit;

namespace Feeder
{
    public class FeedItemCell : UITableViewCell
    {
        public static readonly NSString Key = new NSString ("FeedItemCell");

        public FeedItemCell () : base (UITableViewCellStyle.Subtitle, Key)
        {
            // TODO: add subviews to the ContentView, set various colors, etc.
            TextLabel.Text = "TextLabel";
        }
    }
}
There's not a lot going on in this class and there is only a small change that you'll be making. This class is going to inherit from UITableViewCell. All this class contains is a constructor that calls the base constructor passing in two pieces of data.
The first is the style of the cell. In this example, we use a built-in style known as the Subtitle style. This style allows for two text fields in the cell, one on top of the other.
The second parameter of the base constructor is the key that will represent this type of cell within the list. In this case, every cell within the list will be referred to by the FeedItemCell key.
The next piece of the puzzle is the FeedItemSource class. Replace the contents of the default implementation with the following:
using System;
using System.Collections.Generic;
using MonoTouch.Foundation;
using MonoTouch.UIKit;

namespace Feeder
{
    public class FeedItemSource : UITableViewSource
    {
        private List<RssItem> _items;

        public FeedItemSource (List<RssItem> items)
        {
            _items = items;
        }

        public override int NumberOfSections (UITableView tableView)
        {
            // TODO: return the actual number of sections
            return 1;
        }

        public override int RowsInSection (UITableView tableview, int section)
        {
            // TODO: return the actual number of items in the section
            return _items.Count;
        }

        public override UITableViewCell GetCell (UITableView tableView, NSIndexPath indexPath)
        {
            var cell = tableView.DequeueReusableCell (FeedItemCell.Key) as FeedItemCell;
            if (cell == null)
                cell = new FeedItemCell ();

            // populate the cell with the appropriate data based on the indexPath
            cell.TextLabel.Text = _items [indexPath.Row].Title;
            cell.DetailTextLabel.Text = string.Format ("{0} on {1}", _items [indexPath.Row].Creator, _items [indexPath.Row].PubDate);
            return cell;
        }

        public override void RowSelected (UITableView tableView, NSIndexPath indexPath)
        {
            var item = _items [indexPath.Row];
            var url = new NSUrl (item.Link);
            UIApplication.SharedApplication.OpenUrl (url);
        }
    }
}
Let's break it down to get a better understanding of what's happening. The source needs the data that will be displayed in the list and that is typically handled in the constructor.
The source data in your list is going to be provided by a list of your model classes, RssItem. This list of items is passed into the constructor of the FeedItemSource class and held on to in a private variable _items.
public override int NumberOfSections (UITableView tableView)
{
// TODO: return the actual number of sections
return 1;
}
When using lists in an iOS application, you have the option to group cells into sections. The NumberOfSections method returns how many sections or groups are found within the list. In this particular application, there is a single group that contains all the items, which means the method returns 1.
public override int RowsInSection (UITableView tableview, int section)
{
// TODO: return the actual number of items in the section
return _items.Count;
}
With the number of sections of the list defined, the table view needs to know how many items are found in each section. You've already passed the list of RssItem objects that are going to appear in the list into the constructor and saved it into the private variable _items, so all you need to do is return _items.Count.
public override UITableViewCell GetCell (UITableView tableView, NSIndexPath indexPath)
{
var cell = tableView.DequeueReusableCell (FeedItemCell.Key) as FeedItemCell;
if (cell == null)
cell = new FeedItemCell ();
// TODO: populate the cell with the appropriate data based on the indexPath
cell.TextLabel.Text = _items[indexPath.Row].Title;
cell.DetailTextLabel.Text = string.Format ("{0} on {1}", _items [indexPath.Row].Creator, _items [indexPath.Row].PubDate);
return cell;
}
The next, and arguably the most important, part of the source implementation is the GetCell method. The purpose of this method is to produce and reuse the cells that are present in the list.
var cell = tableView.DequeueReusableCell (FeedItemCell.Key) as FeedItemCell;
if (cell == null)
cell = new FeedItemCell ();
The first line calls the DequeueReusableCell method passing in an argument of the Key of a cell that it's looking for. One of the ideas behind a list is that if the source data contains more items than can fit in the viewable section of the screen, there is no reason to continually create those cells and take up system resources.
Instead, when a cell goes off-screen, it isn't simply discarded. It's placed in a pool of other cells for later use. Later, when a cell with a particular key is needed, the system first checks the pool of reusable cells for cells with that key. If no reusable cell could be found, the cell variable is null, and a new FeedItemCell is created.
If a cell is ready to be used, it needs to be populated with data. How you do this is completely up to you. In our example, we specified that each cell is of the Subtitle type, which means that it has two labels. The top Label is referred to as the TextLabel. In our example, it's populated with the Title property of an RssItem object. To fetch the correct RssItem object, we make use of the indexPath.Row property. The bottom Label is referred to as the DetailTextLabel and is populated with a concatenation of the Creator and PubDate properties of the corresponding RssItem object.
public override void RowSelected (UITableView tableView, NSIndexPath indexPath)
{
var item = _items [indexPath.Row];
var url = new NSUrl (item.Link);
UIApplication.SharedApplication.OpenUrl (url);
}
The final override method within the FeedItemSource class is RowSelected. This method is called every time a cell within the list is tapped by the user. In this case, when a user touches a cell, you fetch the corresponding RssItem instance using the indexPath.Row property. You then create a new NSUrl object with the Link property of the RssItem object and pass that NSUrl to the UIApplication.SharedApplication.OpenUrl method. This method determines which application on the device or simulator is best suited to handle the url. In our example, because the url represents a web address, the built-in browser of the device or the simulator will handle the request.
It's time to turn our attention to the FeedItemController class.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Xml.Linq;
using MonoTouch.Foundation;
using MonoTouch.UIKit;

namespace Feeder
{
    public class FeedItemController : UITableViewController
    {
        private List<RssItem> _items;

        public FeedItemController () : base ()
        {
            using (var client = new HttpClient ()) {
                var xmlFeed = client.GetStringAsync ("http://blog.xamarin.com/feed").Result;
                var doc = XDocument.Parse (xmlFeed);
                XNamespace dc = "http://purl.org/dc/elements/1.1/";

                _items = (from item in doc.Descendants ("item")
                          select new RssItem {
                              Title = item.Element ("title").Value,
                              PubDate = DateTime.Parse (item.Element ("pubDate").Value),
                              Creator = item.Element (dc + "creator").Value,
                              Link = item.Element ("link").Value
                          }).ToList ();
            }
        }

        public override void DidReceiveMemoryWarning ()
        {
            // Releases the view if it doesn't have a superview.
            base.DidReceiveMemoryWarning ();
            // Release any cached data, images, etc. that aren't in use.
        }

        public async override void ViewDidLoad ()
        {
            base.ViewDidLoad ();
            // Register the TableView's data source
            TableView.Source = new FeedItemSource (_items);
        }
    }
}
Before you can successfully compile this code, you will need to add a reference to the System.Xml.Linq assembly. You can do this by right-clicking References in your project and selecting either Add Reference or Edit References, depending on the IDE you're using. You will also need to add the using System.Xml.Linq; statement to the top of the class file, along with using System.Net.Http; for the HttpClient class.
private List<RssItem> _items;
public FeedItemController () : base ()
{
using (var client = new HttpClient ()) {
var xmlFeed = client.GetStringAsync ("http://blog.xamarin.com/feed").Result;
var doc = XDocument.Parse (xmlFeed);
XNamespace dc = "http://purl.org/dc/elements/1.1/";
_items = (from item in doc.Descendants ("item")
select new RssItem {
Title = item.Element ("title").Value,
PubDate = DateTime.Parse (item.Element ("pubDate").Value),
Creator = item.Element (dc + "creator").Value,
Link = item.Element ("link").Value
}).ToList();
}
}
This is where all the logic for retrieving the data from the Xamarin RSS blog feed lives. If you followed along in the Android version of the Introduction to Xamarin tutorial, this probably looks familiar. That's because it is the exact same code.
You start by creating an HttpClient and using the GetStringAsync method to download the data found at the supplied url and use the Parse method on the XDocument class to prepare the data for some Linq-to-Xml magic. Once you have the XDocument object, you can query it to get the values of all the child item elements found in the RSS feed and initialize instances of the RssItem class and save them into the private _items variable.
After the constructor, there are only two methods present in the implementation. Those methods are DidReceiveMemoryWarning and ViewDidLoad. You don't need to do anything with the first method, but like most things it pays to at least know what it's for.
The DidReceiveMemoryWarning method is called at any point within the execution of this class when the device or simulator has determined that your application may be taking up too much memory and could be terminated. This is your opportunity to release some memory-intensive resources to keep that from happening. As its name implies, the ViewDidLoad method is invoked when the view has loaded and before it's presented to the user.
public async override void ViewDidLoad ()
{
    base.ViewDidLoad ();

    // Register the TableView's data source
    TableView.Source = new FeedItemSource (_items);
}
In this method, we call the base implementation of ViewDidLoad and create a new instance of the FeedItemSource class, assigning it to the TableView.Source property. Once this is done, the user will be able to see the data you retrieved from the RSS feed in the table view. If you're wondering where the TableView property comes from, it's inherited from the FeedItemController's base class, UITableViewController. This base class provides a reference to the actual table view in the view controller's view.
Step 6: Putting It All Together
You now have all the necessary pieces to present a list of articles to the user. The only problem is that none of it is showing up yet. The reason is that your application hasn't been told to use the FeedItemController to show the data to the user. To do this, you need to make a small modification to your AppDelegate class.
The AppDelegate class currently contains one method, FinishedLaunching. This method is called on the application delegate by the operating system. To make everything work, we need to make a slight modification to its implementation.
public override bool FinishedLaunching (UIApplication app, NSDictionary options)
{
    // create a new window instance based on the screen size
    window = new UIWindow (UIScreen.MainScreen.Bounds);

    var controller = new FeedItemController ();
    controller.View.BackgroundColor = UIColor.White;
    controller.Title = "Xamarin Feeds";

    var navController = new UINavigationController (controller);
    window.RootViewController = navController;

    // make the window visible
    window.MakeKeyAndVisible ();

    return true;
}
The first four lines are pretty standard. You create a new instance of the UIWindow class, which will contain your application's user interface. You then create a new instance of the FeedItemController class, set the BackgroundColor property of its view to UIColor.White, and give it a Title.
The next few lines may seem a little confusing. You create a new instance of UINavigationController, passing the FeedItemController instance to its constructor, set the RootViewController property of the window object to the navigation controller, and call MakeKeyAndVisible. Why do we need to go through this hassle? Why can't we set RootViewController to the FeedItemController and call it a day? You can do that and your application will still work. However, the status bar at the top of the screen will then overlap your list, which looks bad. Wrapping your controller in a UINavigationController is a little trick that accomplishes two things:
it adds space between the top of your content and the top of the screen
it makes the Title property of the controller visible
It's time to build and run your application in the iOS Simulator. The result should look similar to the screenshot below.
Conclusion
And there you have it. You have just successfully created a fully functional iOS application using nothing but C# and Xamarin. That's a pretty impressive accomplishment if you think about it.
I hope this gives you the confidence and drive to dive deeper into the realm of Xamarin and the doors that it opens for you. From here you can learn about creating Android applications in C# using Xamarin if you haven't already. If you have, you can explore how you can create cross-platform applications that can reuse the majority of a single code base and run on both iOS and Android devices. That's what I'll show you in the next tutorial.
Google I/O is without a doubt one of the highlights of the year, for myself as well as many other developers. The event is always sure to inspire and surprise, and, on occasion, it provides a glimpse into the near future. This year's edition was big, possibly the biggest to date, and the volume of announcements was staggering.
Android L
Android is getting a major overhaul with its next version, currently referred to as Android L. The new version will tout more than 5,000 new APIs, multiple new features, a new runtime, and a major user interface makeover.
Android L was very much a preview, like a lot of the announcements during the keynote. The release date is unknown, but it will appear some time in the fall. A preview SDK has been released and, for the first time, you can try the new version on a Nexus 5 or Nexus 7 device.
Material Design
Android L will be sporting a brand new user interface that Google refers to as Material Design. It's sleek, fresh, and it has a modern look to it. It's everything you'd expect from a user interface update. However, Material Design has a lot more depth to it, which literally is one of its key concepts. You can specify a view's elevation, allowing you to raise user interface elements and cast dynamic shadows.
Material Design comes with a light and a dark theme. Developers can customize a theme, and color changes can be made to areas like the status bar, the navigation bar, and various other user interface elements. Material Design gives developers more options for styling an application's user interface and expressing brand identity.
Widgets
Android L also includes two new user interface widgets. RecyclerView is a more advanced and flexible version of the ListView class and is ideal for listing elements that change dynamically. The RecyclerView class provides a layout manager for item positioning and default animations.
The CardView class extends FrameLayout, which you may already be familiar with. CardView lets you place information inside cards that can be easily modified. The background color, corner radius, and elevation of a card can all be changed.
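To give a sense of what this looks like in a layout file, here is a sketch of a card using the attributes from the support library version of CardView (the attribute names come from the android.support.v7.cardview library; treat the concrete values as illustrative):

```xml
<!-- Assumes the layout's root element declares:
     xmlns:card_view="http://schemas.android.com/apk/res-auto" -->
<android.support.v7.widget.CardView
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    card_view:cardBackgroundColor="#ffffff"
    card_view:cardCornerRadius="4dp"
    card_view:cardElevation="2dp">

    <!-- Card content goes here, for example a TextView. -->

</android.support.v7.widget.CardView>
```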
Animations
Animations are an important element of Material Design. The material theme includes default animations, which can be easily modified. A new set of APIs also lets you create custom animations. The new animation APIs let you:
respond to touch events with touch feedback animations
hide and show views with reveal effect animations
switch between activities with custom activity transition animations
create more natural animations with curved motion
animate changes in one or more view properties with view state change animations
show animations in state list drawables between view state changes
Notifications
Notifications have been further improved and will now show up on the lock screen. In addition, there's a new heads-up notification that can appear at the top of a full-screen application and is dismissible with a simple swipe.
Runtime
The Dalvik runtime will be replaced with ART, which has plenty of new features to justify a changing of the guard.
Project Volta is a part of Android L and is dedicated to improving battery performance. It comprises three parts:
Battery Historian is a new tool designed to measure battery discharge by visualizing an application's power consumption.
The goal of the Android Job Scheduler is to improve an application's power consumption. One of the uses of the Job Scheduler API is scheduling maintenance tasks for when the device is charging.
Battery Saver mode can be used to clock down the CPU, decrease the screen's refresh rate, or turn off background data. Battery Saver mode can be triggered manually or set to start automatically when the battery reaches a certain level.
Recent Apps Screen
Android's Recent Apps interface was previously limited to a single instance of an app. In Android L, developers will be able to mark an activity within their app and have it treated as a separate task in the Recent Apps screen.
Android L will include many other improvements, including graphical improvements and updates to WebView. More information can be found on the Android Developer website.
Integration
The coming together of Android and Chrome has been talked about for some time. Google I/O showed off a number of ways in which this will start to happen.
Polymer
Polymer allows web components to be built and placed in HTML. It takes an object approach to web development and allows extremely complex and flexible objects to be easily imported into any web application. Objects can be placed at the DOM level with a simple element tag.
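In the HTML Imports syntax Polymer relied on at the time, dropping a component into a page looked roughly like this (my-card is a hypothetical element name used for illustration):

```html
<!-- Import the (hypothetical) element definition once... -->
<link rel="import" href="components/my-card.html">

<!-- ...then use it anywhere in the page like a built-in tag. -->
<my-card heading="Hello from Polymer"></my-card>
```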
Polymer isn't new; it was featured during Google I/O 2013. However, this year's event showed the importance of Polymer in Google's long-term vision. To create a consistent user experience across native apps and the web, a set of Material Design elements has been created for Polymer. New design guidelines for the web and native apps have also been released to help developers create a consistent experience. Both websites and mobile apps will be able to utilize the same animations, themes, and widgets. You can read more about this topic on the Google Design website.
Mobile Web Experience
Chrome has received improvements to further bridge the gap between native apps and the web. Search results in Google will also benefit from Material Design, and Chrome tabs will show up as individual activities in the Recent Apps screen. This will greatly improve multitasking and switching between mobile and web apps.
App indexing has been improved. Information from Google search results can now be opened by an appropriate app, given the right scenario. A search for restaurants, for example, will return results in the browser, but you will also have the option to view them in an app like OpenTable for Android.
Chromebooks
Any remaining doubts or questions about Chromebooks were put to rest at this year's Google I/O. It's clear Chromebooks are a hit, with the top ten rated laptops on Amazon all being Chromebooks.
Further improvements have been made to Chromebooks to integrate them more with Android. These include incoming calls and alerts from your phone appearing on your Chromebook and unlocking your Chromebook by having your phone in proximity.
The most significant announcement, however, is that Android apps will be able to run on Chromebooks. During the keynote, Evernote for Android was showcased running on a Chromebook, and the popular Vine for Android application was able to use the Chromebook's camera to create a video. This is very exciting and looks promising for the future.
Android everywhere
Android is set to be coming to a screen near you, any screen.
Android Wear
Android Wear was officially released. Android Wear continues the Android experience onto your wrist and into a smaller form factor. It pairs with any smartphone running Android 4.3 or higher, and can receive and show notifications from your Android apps.
Applications can also show content on the Wear device. Google Now cards are a great example, showing you information as and when it becomes relevant. Much like the Recent Apps screen on phones and tablets, content can be navigated by swiping up or down, and dismissed by swiping left or right. Android Wear also takes advantage of Google voice search and allows you to perform searches or issue commands.
Two devices went on pre-order after the event. The LG G Watch will ship on July 2 and will cost $229. The Samsung Gear Live will ship on July 7 and will cost $199. Both devices have a similar form factor and comparable specifications. They run on a 1.2GHz CPU, have 4GB of storage, and sport 512MB of RAM.
Motorola will also be releasing a Wear device later in this year, the Moto 360, which will be the first smartwatch to feature a circular watch face.
Android Wear seems to be a solid platform, or rather an extension of Android, and will drastically reduce the number of times you need to take your phone out of your pocket.
Android Auto
Android Auto will bring some of Google's best features to your car. It strips down the Android experience to the aspects most relevant to driving, optimizing them for a dashboard user experience.
Google Voice Search is an important part of this experience. It can be initiated with the push of a button located near the steering wheel, allowing commands to be issued hands-free. This should create a more familiar, intelligent, and safer drive.
Android TV
Android TV did another lap of the track this year with a host of improvements. Android TV will now be treated like any other Android screen, with a tailored user interface.
It uses the same APIs as your phone and tablet, making it easy to interact with. A single APK should be able to run across devices, including compatible television sets.
It will also include Google Voice Search, which lends itself well to sitting back on the sofa. Android TV will come with full Chromecast support.
Gaming is also getting a big push on Android TV, with many games already announced as being compatible at launch and the ability for multiplayer games to be played across devices.
Android TV is available for manufacturers to embed in their television sets and also as set-top boxes. Google Play will be opening its doors to Android TV in the fall.
Chromecast
Chromecast continues to add new features. You can now use Chromecast when the casting and receiving devices are not on the same Wi-Fi network. The content will be sent via the cloud, and a PIN authentication system will be used for security.
You will soon be able to cast the entire screen of your Android device, enabling you to show anything, regardless of the app you're running. This will initially be limited to a selection of devices.
Google at Work
Android continues its move into every part of your life, and the workplace is no exception. Android devices will soon be able to isolate personal and work data, allowing the same device to be used for both purposes. Enterprises will also be able to bulk deploy apps through a certified Android for Work program coming in the fall.
Google Apps has not been left out either. Google Docs will now be able to natively edit Office documents. There is also a new Google Drive for Work service being launched. It improves management, encryption, reporting, and auditing, and it will come with unlimited storage at a cost of $10 per user per month.
Cloud Services
Google Cloud Services gained new debugging, tracing, and monitoring tools.
Google also announced Cloud Save, which will allow data to be saved to the cloud with just a few lines of code from within your Android application, no backend setup required. Data stored in Cloud Save can be retrieved at any point on any device, or it can be used by another service, such as Google BigQuery.
Google Fit
With health and fitness technology and apps trending, it was no surprise to hear that Google had something in store related to fitness.
The new Google Fit platform will allow developers to track fitness data, synchronize it across devices, and store it in a central location. A few major brands, such as Nike and Adidas, have already confirmed that they'll be using the service.
Conclusion
This year really left me with a lot to take in. The future of Android and Google looks bright, and the rest of the year will be busy. I'm very excited about Polymer. The idea of easily having themes running through websites and mobile apps is very appealing, and reusable components are the icing on the cake.
With all that was announced, I did notice the omission of anything relating to the smart home and Google Glass. It is odd that Google made no mention of last year's biggest announcement.