Logging is one of the most useful instruments to inspect, understand, and debug iOS and OS X applications. You are probably familiar with the NSLog function provided by the Foundation framework, but have you ever felt the need for something more powerful? CocoaLumberjack is an open source library created and maintained by Robbie Hanson. CocoaLumberjack takes logging to a whole new level, and in this tutorial I will show you how to setup and use CocoaLumberjack in an iOS application.
Logging? Who Needs Logging?
Logging diagnostic information to a console, file, or remote server is widely used in almost any type of software development. It is one of the simplest forms of debugging, which is probably why it is so widespread. It is the first tool that I use when I am debugging or trying to understand a complex piece of logic regardless of the language. It is easy, fast, and comes with very little overhead.
Why should you use CocoaLumberjack if all it does is send pieces of data to the console or a file? One reason is that CocoaLumberjack is (mostly) faster than the NSLog function provided by the Foundation framework. Thanks to a number of convenient macros provided by CocoaLumberjack, switching from NSLog to CocoaLumberjack is as easy as replacing your NSLog statements with DDLog statements.
Another benefit of CocoaLumberjack is that one log statement can be sent to multiple loggers (console, file, remote database, etc.). You can configure CocoaLumberjack in such a way that it behaves differently depending on the build configuration (Debug, Release, etc.). There is much more that CocoaLumberjack can do for you so let me show you how to get started with this nifty library.
Step 1: Setting Up CocoaLumberjack
Create a new project in Xcode by selecting the Single View Application template from the list of available templates (figure 1). Name your application Logging, enter a company identifier, set iPhone for the device family, and then check Use Automatic Reference Counting. The rest of the checkboxes can be left unchecked for this project (figure 2). Tell Xcode where you want to save the project and hit the Create button.
Adding the CocoaLumberjack library to your project is as easy as downloading the latest version from GitHub, extracting the archive, and dragging the folder named Lumberjack into your project. The core files are DDLog.h/.m, DDASLLogger.h/.m, DDTTYLogger.h/.m, and DDFileLogger.h/.m. The other files in the folder are stubs for more advanced uses of CocoaLumberjack, which I won’t cover in this tutorial. You can ignore or delete these files.
If you take a peek inside DDLog.h and DDLog.m, you may be surprised by the number of lines of code in these files. As I said, CocoaLumberjack has a lot of really useful features. CocoaLumberjack is more powerful than NSLog because it takes advantage of multi-threading, Grand Central Dispatch, and the power of the Objective-C runtime.
You will also notice that there are a surprising number of macros defined in DDLog.h. We won’t use the majority of these macros. The macros that we will use in this tutorial are DDLogError, DDLogWarn, DDLogInfo, and DDLogVerbose. They all perform the same task, but each macro is associated with a log level. I will talk more about log levels in a few moments.
Before we start using CocoaLumberjack, it is a good idea to add an import statement to the project’s precompiled header file. Open Logging-Prefix.pch and add an import statement for DDLog.h. This ensures that the macros defined in DDLog.h are available throughout the project.
#import <Availability.h>
#ifndef __IPHONE_4_0
#warning "This project uses features only available in iOS SDK 4.0 and later."
#endif
#ifdef __OBJC__
#import <UIKit/UIKit.h>
#import <Foundation/Foundation.h>
#import "DDLog.h"
#endif
Step 2: Adding a Logger
Configuring CocoaLumberjack is easy. First, however, we need to import several classes of the CocoaLumberjack library. At the top of MTAppDelegate.m, add an import statement for DDASLLogger.h, DDTTYLogger.h, and DDFileLogger.h (see below). The first two classes are in charge of sending log messages to the Console application (Console.app) and Xcode’s Console. The DDFileLogger class takes care of writing log messages to a file on disk.
In the application delegate’s application:didFinishLaunchingWithOptions: method, we add two loggers as shown below. Both DDASLLogger and DDTTYLogger are singletons as you may have noticed. With this setup, we mimic the behavior of the NSLog function, that is, log messages are sent to the Console application (Console.app) and Xcode’s Console.
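A minimal sketch of this setup, with the imports at the top of MTAppDelegate.m and the two addLogger: calls in place (the rest of the method body is omitted for brevity):

```objc
#import "MTAppDelegate.h"

#import "DDASLLogger.h"
#import "DDTTYLogger.h"
#import "DDFileLogger.h"

@implementation MTAppDelegate

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // Send log messages to the Console application (Console.app).
    [DDLog addLogger:[DDASLLogger sharedInstance]];
    // Send log messages to Xcode's Console.
    [DDLog addLogger:[DDTTYLogger sharedInstance]];

    return YES;
}

@end
```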
This is all that we have to do to get started with CocoaLumberjack. You can test this out by adding the following log statements to the viewDidLoad method of the MTViewController class. Build and run the project in the iOS Simulator to see if everything works as expected.
- (void)viewDidLoad {
[super viewDidLoad];
DDLogError(@"This is an error.");
DDLogWarn(@"This is a warning.");
DDLogInfo(@"This is just a message.");
DDLogVerbose(@"This is a verbose message.");
}
Did you also run into a compiler error? The compiler error reads Use of undeclared identifier ‘ddLogLevel’. It seems that we need to declare ddLogLevel before we can make use of CocoaLumberjack. This is actually a feature of CocoaLumberjack. By declaring and dynamically assigning a value to ddLogLevel we can configure CocoaLumberjack in such a way that log statements are executed based on the build configuration. To understand what I mean, amend the precompiled header file of our project (Logging-Prefix.pch) as shown below.
#import <Availability.h>
#ifndef __IPHONE_4_0
#warning "This project uses features only available in iOS SDK 4.0 and later."
#endif
#ifdef __OBJC__
#import <UIKit/UIKit.h>
#import <Foundation/Foundation.h>
#import "DDLog.h"
#endif
#ifdef DEBUG
static const int ddLogLevel = LOG_LEVEL_VERBOSE;
#else
static const int ddLogLevel = LOG_LEVEL_ERROR;
#endif
By default, CocoaLumberjack defines four log levels, (1) error, (2) warning, (3) info, and (4) verbose. Defining log levels is very common in logging libraries (e.g., log4j and log4php). By assigning a log level to a log statement, it can be categorized, which is very useful as you will see in a moment. In the precompiled header file, we declare ddLogLevel and assign a value to it. The value of ddLogLevel determines which log statements are executed and which are ignored. In other words, if the build configuration is equal to Debug (read: if the preprocessor macro DEBUG is defined), then ddLogLevel is equal to LOG_LEVEL_VERBOSE, the highest log level. This means that every log statement will be executed. However, if the build configuration is not equal to Debug, then only log statements with a log level of error are executed. It is important to know that the log levels are ordered as you can see in DDLog.h where they are defined.
Why is this useful? This provides a very easy mechanism to control what is being logged based on the build configuration. You can try this out by changing the current active scheme in Xcode. Stop the application and click the active scheme named Logging on the right of the stop button (figure 3). Select Edit Scheme… from the menu and click Run Logging on the left (figure 4). Under the Info tab, set the Build Configuration to Release (figure 4). With this option, you select the build configuration that Xcode should use when the application runs in the iOS Simulator.
If you now build and run your project in the iOS Simulator, you should only see log statements with a log level of error printed to Xcode’s Console. All log statements with a log level higher than error are ignored. Keep in mind that the DEBUG preprocessor macro is named CONFIGURATION_DEBUG in Xcode 3. You can read more about this on the CocoaLumberjack’s Wiki.
Step 3: Logging to a File
Logging to a file is a piece of cake with CocoaLumberjack. Not only is it easy to set up, CocoaLumberjack comes with a number of useful options, such as limiting the file size of log files and setting a rolling frequency. You can even tell CocoaLumberjack to remove old log files as new log files are created. Let me show you how this works.
Revisit the application delegate’s application:didFinishLaunchingWithOptions: method and update its implementation as shown below. After initializing an instance of DDFileLogger, we configure it by (1) setting the maximum file size of each log file (in bytes), (2) setting the rolling frequency to 24 hours, and (3) setting the maximum number of log files that should be kept to seven. Don’t forget to add the file logger as we did earlier.
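A sketch of that configuration is shown below; the 1 MB file size limit is only an example value, so pick whatever limit suits your application:

```objc
// Initialize and configure the file logger.
DDFileLogger *fileLogger = [[DDFileLogger alloc] init];
// (1) Limit each log file to roughly 1 MB (the value is in bytes).
[fileLogger setMaximumFileSize:(1024 * 1024)];
// (2) Roll to a new log file every 24 hours (the value is in seconds).
[fileLogger setRollingFrequency:(60 * 60 * 24)];
// (3) Keep at most seven log files on disk; older files are removed.
[[fileLogger logFileManager] setMaximumNumberOfLogFiles:7];
// Don't forget to add the file logger, just like the other loggers.
[DDLog addLogger:fileLogger];
```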
Before you build and run the project, open the Finder and browse to ~/Library/Application Support/iPhone Simulator/&lt;version&gt;/Applications/&lt;application id&gt;/Library/Caches/. As you can see, the exact path depends on the version of the iOS Simulator you are using and on your application’s identifier. Run the application in the iOS Simulator and inspect the contents of the Caches directory. It should now have a folder named Logs containing one text file named log-XXXXXX.txt. The last six characters of the file name are unique to prevent log files from being overwritten. It is possible to specify the location where the log files are stored. Keep in mind that the Caches directory can be emptied by the operating system at any time. If you want to store your application’s log files in a safer location, then I suggest storing them in the application’s Documents directory.
Bonus: Colors
Even though colors seem like nothing more than eye candy, every developer knows how important colors are when working in a code editor. With CocoaLumberjack, you can add color to Xcode’s Console. Robbie Hanson, the creator of CocoaLumberjack, also contributed to an Xcode plugin named Xcode Colors. CocoaLumberjack works very well with Xcode Colors. Download the latest version of Xcode Colors, extract the archive, and put its contents in Xcode’s plug-ins folder (located at ~/Library/Application Support/Developer/Shared/Xcode/Plug-ins/), and restart Xcode. Note that it might be necessary to manually create the plug-ins folder if it isn’t present.
To enable colors in Xcode’s Console, head back to the application:didFinishLaunchingWithOptions: method and tell the shared instance of the DDTTYLogger class to enable colors (see below). CocoaLumberjack uses default colors if you don’t specify a color for a specific log level. Overriding the default color settings is easy as shown below. Run the application in the iOS Simulator and inspect Xcode’s Console window to see the result (figure 5).
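A sketch of both steps; the purple color for the info level is an arbitrary choice for illustration:

```objc
// Enable colors in Xcode's Console (requires the Xcode Colors plugin).
[[DDTTYLogger sharedInstance] setColorsEnabled:YES];

// Optionally override the default color for a log level, for example info.
[[DDTTYLogger sharedInstance] setForegroundColor:[UIColor purpleColor]
                                 backgroundColor:nil
                                         forFlag:LOG_FLAG_INFO];
```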
I already mentioned that CocoaLumberjack defines four log levels by default. It is possible, however, to define custom log levels. I won’t discuss custom log levels in this tutorial, but if you want to know more about this feature, then I suggest that you read the article about custom log levels on CocoaLumberjack’s Wiki.
Combining colors with custom log levels results in a very powerful tool to collect data and debug an application. Keep in mind that CocoaLumberjack has a lot more to offer than what I have shown in this short tutorial. With CocoaLumberjack, you can create custom loggers as well as custom formatters. Custom loggers are especially useful if you want to log to a database or send log files to a remote server at regular time intervals. CocoaLumberjack really is a powerful library that has become an indispensable tool in my toolbox.
Conclusion
Logging application data and diagnostic information to the console or a file can be very useful when debugging problems both during development and production. Having a solid logging solution in place is therefore essential. Along with many other developers, I have created custom logging solutions for many projects, but CocoaLumberjack is an ideal replacement and it has a lot more to offer.
Rockable Press is proud to present our latest release: Decoding the iOS 6 SDK. Written by five seasoned iOS experts and packed with almost 500 pages of essential iOS 6 development fundamentals, this great new eBook will quickly get you up to speed with the iOS 6 SDK and all the fundamental changes that occurred to Xcode and the iOS device landscape in 2012. Get your copy now!
Who This Book is For
Decoding the iOS 6 SDK is written for intermediate iOS developers who want to quickly get up to speed with the iOS 6 SDK and all the fundamental changes that occurred to Xcode and the iOS device landscape in 2012.
Beginning iOS developers who understand the fundamentals of the SDK and Xcode will also benefit from reading this work.
The book follows a non-linear format that allows the reader to decide how much time to spend on any given topic. This is accomplished by dividing each chapter into a “Theoretical Overview” section and a “Tutorial Project” section.
While it’s certainly possible to read the entire book from cover-to-cover (we think you’ll enjoy doing so!), it’s also possible to simply read the “Theoretical Overview” of each chapter to get a high-level understanding of the iOS 6 SDK changes, and then go back and focus in on the most relevant or interesting tutorial projects on a more selective basis.
Regardless of how you approach the book, expect to learn about the most essential aspects of developing with the iOS 6 SDK, Xcode, and all the iOS devices released from Cupertino in 2012.
Hear From an Author…
Hello, I’m Mark Hammonds, and I’m one of the authors of Decoding the iOS 6 SDK.
As I’m sure you know, many exciting new enhancements and changes were introduced with the release of iOS 6. These changes are all the more relevant as over 60% of iOS devices upgraded to iOS 6 within just the first month of release.
In addition to a new major iOS release, developers also saw the introduction of three new iOS devices, two of which, the iPhone 5 and the iPad mini, had completely new form-factors. The iOS development workflow received many new changes as well.
Xcode received 9 new releases in 2012, and the LLVM Compiler received updates that will fundamentally alter the way many of us write Objective-C code. For those who already work full-time in the industry, keeping pace with these changes can feel like a second job!
The purpose of this book is to provide one convenient resource for sharpening your iOS skills and keeping pace with all the latest advances from Cupertino.
The content is organized in such a fashion that you can quickly jump to the sections that matter the most to you, and that means you can spend less time simply reading about the iOS 6 SDK and more time actually building the next great iOS 6 app!
Start off by learning about all the most important changes to the iOS ecosystem and the iOS 6 SDK. The tutorial project from this chapter covers enhancements to Xcode, the LLVM Compiler, autorotation changes in iOS 6, writing backwards compatible code, and deploying to the iPhone 5, the iPad 3, and the iPad mini.
CHAPTER 2: UITABLEVIEW CHANGES & ENHANCEMENTS by Aron Bury
Table views are one of the most widely used components in iOS development. iOS 6 introduces several important changes to table views, including a paradigm shift for generating table view cells, the ability to provide header and footer content, and a new “pull-to-refresh” control from Apple.
iOS 6 introduces several important privacy enhancements that provide more control over the level of access apps are granted to user data. This chapter demonstrates how to interact with the address book, calendars, reminders, location data, and user media using the new API calls.
In combination with the release of Apple Maps, Map Kit also received a significant number of API improvements and changes. This chapter will get you up-to-speed while teaching you how to build a simple app for tracking cycling routes.
Event Kit will now allow developers to access user Reminders and set proximity or time- based alarms. Learn how to take advantage of these features and build a Dinner Reservations app.
Customizing an app’s interface is a great way to stand out in the App Store. This chapter will demonstrate the many new theming capabilities provided by UIKit in iOS 6!
Developing applications that will look great across the 3.5” iPhone, the 4” iPhone, the iPad, and the iPad mini can be a challenge. Fortunately, Apple’s new Auto Layout system makes this much easier to accomplish, and this chapter will introduce you to the fundamentals.
CHAPTER 8: ADVANCED AUTO LAYOUT TECHNIQUES by Akiel Khan
Picking up where chapter 7 left off, this chapter will dive into more advanced Auto Layout concepts, including the new Visual Format Language (VFL) for managing layout constraints.
CHAPTER 9: COLLECTION VIEW FUNDAMENTALS by Bart Jacobs
With the rise of the iPad, many iOS developers have begun using advanced, grid-based layouts in their applications. The UICollectionView class makes doing so easier and more robust than ever, and collection views are sure to become a favorite tool among iOS developers.
CHAPTER 10: ADVANCED COLLECTION VIEW TECHNIQUES by Bart Jacobs
This chapter demonstrates the versatility and genius of collection views by showcasing several of the more advanced aspects of their architecture. The tutorial project in this chapter will teach you how to build a music playlist application that pulls content from the local device music library.
CHAPTER 11: SOCIAL FRAMEWORK FUNDAMENTALS by Bart Jacobs
The introduction of the Social Framework makes it easier than ever before to share application content on Twitter, Facebook, or Sina Weibo. This chapter will get you up to speed with the basics and teach you how to build a simple photo sharing app.
CHAPTER 12: ADVANCED SOCIAL FRAMEWORK TECHNIQUES by Bart Jacobs
This chapter expands on the principles introduced in Chapter 11 and demonstrates how to build a basic Twitter client using a combination of the Accounts Framework and the Social Framework.
CHAPTER 13: STATE PRESERVATION AND RESTORATION by Akiel Khan
This chapter will show you how to get the most out of the new state preservation and restoration capabilities of the SDK, making it easier than ever to convey a sense of continuity and efficiency within your apps!
One of the most promising new additions to iOS 6 is the Passbook application. This chapter will teach you how to build a demo snowboarding ticket that interacts with Passbook and the Pass Kit framework.
Styles and themes are time-saving ways to create a consistent look and feel across your Android application. In this tutorial, I’ll cover everything you need to know to apply these useful concepts to your own project. Starting with a quick introduction, this tutorial then demonstrates how to work with the styles and themes that are predefined by the Android platform. Finally, I will walk you through the process of creating your own, both from scratch and by using inheritance.
What Are Styles and Themes?
Styles and themes are essentially the same thing: a collection of properties. These properties can be anything from button color to the “wrap content” attribute or the size of your text. The crucial difference is how they’re applied to your project:
A style is applied to a View.
A theme is applied to individual activities or an entire application.
Why Should I Use Themes and Styles?
Implementing styles and themes has several benefits.
Efficiency: If you’re going to be using the same collection of attributes throughout your application, defining them in advance means you write them once and reference them everywhere, instead of repeating the same list of attributes on every View.
Consistency: Defining a style or theme helps to ensure a consistent look and feel.
Flexibility: When you inevitably come to tweak your UI, you only need to touch the code in one location. This change is then automatically replicated, potentially across your entire application. Styles and themes give you the freedom to quickly and easily update your UI, without touching previously-tested code.
Step 1: Create a Custom Style
Defining and referencing custom styles in Android is similar to using string resources, so we’ll dive straight in and implement a custom style.
Open the res/values/styles.xml file. This is where you’ll define your styles.
Ensure the styles.xml file has opening and closing “resources” tags:
<resources>
</resources>
Give your style a unique identifier. In this tutorial, we’ll use “headline”:
<style name="headline">
Add your attributes and their values as a list of items.
<item name="android:textStyle">bold</item>
Once you’ve finished adding items, remember the closing tag:
</style>
This is the custom style we’ll use in this tutorial:
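Assuming a headline style built from the bold item shown above, plus a text size and color chosen purely for illustration, the complete file might look like this:

```xml
<resources>
    <style name="headline">
        <item name="android:textSize">22sp</item>
        <item name="android:textStyle">bold</item>
        <item name="android:textColor">#2200CC</item>
    </style>
</resources>
```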
Tip: Note the missing ‘android:’ XML prefix. This prefix is omitted because the “headline” style isn’t defined in the android namespace.
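To see the style in action, apply it to a View in your layout file via its style attribute; the TextView and its text below are placeholders:

```xml
<TextView
    style="@style/headline"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="This is a headline" />
```

Note that the style attribute itself also omits the android: prefix, since it isn’t part of the android namespace.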
Boot up the emulator and take a look at your custom style in action.
Step 3: Examine the Predefined Styles
You’ve seen how easy it is to define and apply a custom style, but the Android platform features plenty of predefined styles, too. You can access these by examining the Android source code.
Locate the Android SDK installed on your hard drive and follow the path: platforms/android/data/res/values
Locate the ‘styles.xml’ file inside this folder. This file contains the code for all of Android’s predefined styles.
Step 4: Apply a Default Style
Pick a style to apply. In this example, we’ll use one of Android’s predefined text appearance styles.
Return to your layout file, and add the following to your View:
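For example, applying one of the platform’s built-in text appearance styles to a TextView (the specific style chosen here is only an illustration):

```xml
<TextView
    style="@android:style/TextAppearance.Large"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="A large headline" />
```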
Experiment with other default styles to see what different effects can be achieved.
Step 5: Create a themes.xml File
Now that you’re familiar with custom and default styles, we’ll move onto themes. Themes are very similar to styles, but with a few important differences.
Before we apply and define some themes, it’s a good idea to create a dedicated themes.xml file in your project’s “Values” folder. This is especially important if you’re going to be using both styles and themes in your project.
To create this file:
Right-click on the “Values” folder.
Select “New”, followed by “Other”.
In the subsequent dialog, select the “Android XML Values File” option and click “Next”.
Enter the filename “themes” and select “Finish”.
Step 6: Apply a Default Theme
Unlike styles, themes are applied in the Android Manifest, not the layout file.
Pick a theme to work with by opening the ‘themes’ file in platforms/android/data/res/values and scrolling through the XML code to see what’s available.
Open the Android Manifest file. Depending on the desired scope of your theme, either apply it to:
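You can apply it either to the application tag, which themes the whole app, or to an individual activity tag; the Holo theme used here is only an illustration:

```xml
<!-- Theme the entire application... -->
<application android:theme="@android:style/Theme.Holo.Light">
    ...
</application>

<!-- ...or theme a single activity. -->
<activity android:theme="@android:style/Theme.Holo.Light">
    ...
</activity>
```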
Although you can define custom themes from scratch, it’s usually more efficient to extend one of the Android platform’s predefined themes using inheritance. By setting a “parent” theme, you implement all of the attributes of the predefined theme, but with the option of re-defining and adding attributes to quickly create a tailor-made theme.
In values/themes.xml, set the unique identifier as normal, but specify a “parent” theme to inherit from.
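A sketch of an inherited theme, using the PinkTheme name that the manifest below refers to; the window background override and its color resource are illustrative assumptions:

```xml
<resources>
    <style name="PinkTheme" parent="@android:style/Theme.Light">
        <!-- Assumes a matching color resource is defined in res/values/colors.xml. -->
        <item name="android:windowBackground">@color/pink</item>
    </style>
</resources>
```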
Open the Android Manifest and locate either the application or activity tag.
Apply your theme:
<application android:theme="@style/PinkTheme">
As always, check your work in the emulator:
Conclusion
In this tutorial, we covered using the Android platform’s predefined styles and themes as well as creating your own. If you want to learn more about what’s possible with themes and styles, the easiest way is to spend some time browsing through the corresponding files in the platforms/android/data/res/values folder. Not only will this give you an idea of the different effects that can be achieved, but it’s also a good way to find parents that can be extended using inheritance.
Mobile analytics provides a wide range of services to developers, and Mixpanel is an established player in both web and mobile analytics. In this tutorial, I will demonstrate how Mixpanel sets itself apart from its competitors. I will show you how to get started with Mixpanel and how it can help you with customer retention and engagement, two critical aspects of the current mobile landscape.
Disclaimer
This tutorial is not sponsored or endorsed by Mixpanel. I have been using Mixpanel for quite some time now and I am very pleased with the results I have gotten so far. I have written this tutorial to demonstrate what Mixpanel can do for you and your business.
Learn from Your Customers
When used correctly, mobile analytics can be a valuable source of information. Mobile analytics can be much more than a tool to keep track of the number of active users of an application. If you aim to create an application that customers love and keep coming back to, then mobile analytics is indispensable.
Collecting data is not the only thing that Mixpanel can be used for. The data Mixpanel collects is carefully analyzed and presented in such a way that trends, patterns, and problems quickly reveal themselves. It provides calls to action to further improve and refine your product.
Mixpanel really shines when it comes to customer retention and engagement. Users can be followed over time by associating each user with a unique identifier. This gives you the data and the tools to study why customers stop using your product or why certain features remain underused.
Without mobile analytics, it is virtually impossible to get a realistic view of your application’s customer base. The active user base of an application is an important metric, because it gives you a good indication of the application’s health. A declining user base is a clear indication that the product has some serious flaws that need to be fixed. A declining user base could be due to a usability problem, but it could just as well be a marketing problem – and it often is.
Privacy
It is critical to mention privacy when discussing mobile analytics. Apple does not like it when the privacy of its customers is violated and it has rejected countless applications for this exact reason. You can use Mixpanel without asking your users for any personal information. However, it is important to remember that the customer needs to be aware that you are collecting data, especially if that data contains personal information. Always respect the privacy of your customers.
In the rest of this tutorial, I will show you how to get started with Mixpanel by creating an account, integrating Mixpanel into an iOS project, and collecting data for analysis. Integrating Mixpanel into an iOS project is easy and the Mixpanel API is very intuitive to use.
Step 1: Project Setup
Create a new project in Xcode by selecting the Single View Application template from the list of templates (figure 1). Name your application Analyzed, enter a company identifier, set iPhone for the device family, and check Use Automatic Reference Counting. The rest of the check-boxes can be left unchecked for this project (figure 2). Tell Xcode where you want to save the project and hit the Create button.
Figure 1
Figure 2
Step 2: Adding the Mixpanel Library
Download the latest version of the Mixpanel library for iOS from GitHub and extract the downloaded archive. Look for the Mixpanel folder and import it into your Xcode project. When doing so, make sure to check the checkbox labeled Copy items into destination group’s folder (if needed) and add the Mixpanel library to the Analyzed target (figure 3).
Figure 3
The Mixpanel library depends on the SystemConfiguration and CoreTelephony frameworks, so let’s link our project against these frameworks. Click the project in the Project Navigator on the left, select the target named Analyzed from the list of targets, open the Build Phases tab at the top, and expand the Link Binary With Libraries drawer. Click the small plus button to link your project against both frameworks (figure 4).
Figure 4
Unfortunately at the time of writing, the Mixpanel library doesn’t support ARC (Automatic Reference Counting). To remedy this, we must add a compiler flag to each of the files of the Mixpanel library. Click the project in the Project Navigator on the left, select the target named Analyzed from the list of targets, open the Build Phases tab at the top, and expand the Compile Sources drawer. Add a compiler flag with value -fno-objc-arc to each file of the Mixpanel library (figure 5).
Figure 5
Since we will be using Mixpanel throughout the project, it is a good idea to import the Mixpanel header file in the project’s pre-compiled header file (Analyzed-Prefix.pch) as shown below. This makes working with the Mixpanel library a little easier.
#import <Availability.h>
#ifndef __IPHONE_4_0
#warning "This project uses features only available in iOS SDK 4.0 and later."
#endif
#ifdef __OBJC__
#import <UIKit/UIKit.h>
#import <Foundation/Foundation.h>
#import "Mixpanel.h"
#endif
Step 3: Creating a Project in Mixpanel
Mixpanel is free for up to 25,000 data points, so I encourage you to create a new Mixpanel account and follow along with me. It takes less than two minutes to create a Mixpanel account (figure 6).
Figure 6
Once you are signed in, you need to create a project for your application. I recommend that you create a new project for each application. Click the button labeled My New Project in the top left, enter the name of your iOS application, and click the Create Project button (figure 7).
Figure 7
To start using Mixpanel in your iOS application, you need to copy the project’s token, which you can find in the project’s settings panel. Click the gear icon in the bottom left (figure 7) and select the Management tab at the top of the settings panel that appears. This will show you a list of project settings including the token of the project, a long alphanumeric string (figure 8). Copy this string to the clipboard.
Figure 8
Step 4: Mixpanel Setup
Mixpanel needs to be initialized each time the application launches. The designated place to do this is in the application delegate’s application:didFinishLaunchingWithOptions: method. Initializing Mixpanel is as easy as calling the sharedInstanceWithToken: class method on the Mixpanel class and passing the project token we copied from the project’s settings panel a moment ago. Mixpanel is now aware of the project you just created in the Mixpanel dashboard.
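A minimal sketch of this initialization; the token string is a placeholder for the one you copied from the dashboard:

```objc
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // Replace the placeholder with the token copied from the project's settings panel.
    [Mixpanel sharedInstanceWithToken:@"YOUR_PROJECT_TOKEN"];

    return YES;
}
```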
As stated earlier in this tutorial, it is important to identify each user as a separate individual. Mixpanel does this for you automatically by using a hash of the MAC address of the device.
You can fetch this unique identifier by asking the Mixpanel singleton for its distinctId. Note that the setter method of the distinctId property is named identify: (not setDistinctId:). It is also possible to set the distinctId property to a unique identifier that you provide. This is useful if you store your customers in a remote database, because it lets you keep that remote database in sync with Mixpanel’s database by setting the customer’s unique identifier accordingly.
In addition, you may want to generate your own unique identifier for practical reasons. The hash of the MAC address used by Mixpanel is tied to the device. In other words, if one user has your application installed on multiple devices (e.g., universal applications), then that user will show up as multiple separate users. I usually solve this issue by generating my own unique identifier and storing that identifier in iCloud. Not only does it solve the above issue, it also makes sure that the customer persists across installs. How neat is that?
In the example below, I demonstrate how to generate your own unique identifier and store it in the user defaults database. After generating the unique identifier, we set the distinctId property of the Mixpanel singleton object by sending it a message of identify: and pass the unique identifier. Take a look at the code snippet below for clarification. I have also created a helper method, setupMixpanel, to keep everything organized. Note that I make use of the new NSUUID class that makes generating a unique identifier simple.
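A sketch of that helper method; the user defaults key name is an arbitrary choice of mine:

```objc
- (void)setupMixpanel {
    NSUserDefaults *userDefaults = [NSUserDefaults standardUserDefaults];
    // The key name is arbitrary; pick one and stick with it.
    NSString *identifier = [userDefaults objectForKey:@"MTMixpanelIdentifier"];

    if (!identifier) {
        // NSUUID makes generating a unique identifier trivial.
        identifier = [[NSUUID UUID] UUIDString];
        [userDefaults setObject:identifier forKey:@"MTMixpanelIdentifier"];
        [userDefaults synchronize];
    }

    // Tell Mixpanel to use our identifier instead of the MAC address hash.
    [[Mixpanel sharedInstance] identify:identifier];
}
```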
Tracking events makes Mixpanel interesting. Start by opening MTViewController.xib and add two buttons to the view controller’s view. Give one button the title Milestone 1 and the other button the title Milestone 2 (figure 9). Imagine that each button represents a feature or function in your application that you are interested in.
Figure 9
In the view controller’s implementation file, add an action for each button and, in Interface Builder, connect each action with the corresponding button. For simplicity, I have named the actions reachedMilestone1: and reachedMilestone2:, respectively.
To track an event in Mixpanel, send the shared Mixpanel object a message of track: and pass it a string, that is, the name of the event that you would like to track. Take a look at the implementation of the reachedMilestone1: action below.
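A minimal implementation could look like this (the event name is an illustrative choice):

```objc
- (IBAction)reachedMilestone1:(id)sender {
    // Track the event under a descriptive name.
    [[Mixpanel sharedInstance] track:@"Reached Milestone 1"];
}
```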
In addition to tracking an event, it is also possible to associate parameters or properties with an event by using the track:properties: method. The second argument of track:properties: is a dictionary containing the properties you want to link to the event. Keep in mind that only valid property list classes (NSString, NSDate, NSNumber, etc.) can be stored in the properties dictionary. Take a look at the example below for clarification.
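For example, the second milestone could attach a date and a source to the event (both property names are illustrative):

```objc
- (IBAction)reachedMilestone2:(id)sender {
    // Only property list types (NSString, NSDate, NSNumber, etc.) are allowed.
    [[Mixpanel sharedInstance] track:@"Reached Milestone 2"
                          properties:@{ @"Date" : [NSDate date],
                                        @"Source" : @"Button" }];
}
```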
Mixpanel sends a number of parameters by default, such as the device model (iPad, iPhone, etc.), the operating system version, the application version, etc. In some cases, you may want to send a fixed set of properties with every event that Mixpanel tracks. To make this easier, you can register so-called super properties. Super properties are sent with every event that Mixpanel tracks, like the default properties I mentioned above. In the example below, I have registered a dictionary of super properties in the setupMixpanel helper method.
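A sketch of such a registration (the property name and value are examples):

```objc
// Inside the setupMixpanel helper: super properties registered here
// are attached to every event tracked from now on.
[[Mixpanel sharedInstance] registerSuperProperties:@{ @"Account Type" : @"Free" }];
```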
Super properties can be cleared by sending the Mixpanel object a message of clearSuperProperties. There are several other methods related to setting and reading super properties, which you can find in the Mixpanel documentation.
Step 7: Mixpanel Dashboard
Trends
You can build and run your application in the iOS Simulator or on a device to test what we have created so far. Mixpanel collects data whenever a user triggers one of the events that we defined earlier. In the background, Mixpanel sends the collected data to its servers whenever possible. Even though Mixpanel does not provide a live view like Google Analytics does, the data usually shows up in a matter of minutes (figure 10).
Figure 10
The Trends view gives you a summary of how your application is being used by your customers based on the actions they take. The Trends view is great for studying user engagement. Mixpanel conveniently compares user engagement over time, which gives you an accurate sense of how your application is being used as well as the state and growth of your customer base. Keep in mind that by default the Trends view shows the total number of events tracked. Depending on the event, it can be more useful to only show unique events.
Segmentation
The Segmentation view is an incredibly powerful aspect of the Mixpanel dashboard. It lets you segment customers based on an event parameter or property. In the example below (figure 11), I have segmented customers based on device model, a property that Mixpanel aggregates by default. With Segmentation, the possibilities are virtually endless. For example, the segmentation view tells you what percentage of your customer base is running the latest version of your application. It also tells you what version of iOS your customers are using, which is incredibly helpful if you are thinking about abandoning support for an older iOS version. Such decisions are always difficult to make, but with a tool like Mixpanel you at least have an accurate view of the impact it will have on your customers.
Figure 11
Conclusion
Mixpanel is one of the top analytics tools that I work with. Mixpanel’s success is partly due to the fact that a majority of the data processing is done for you. Data is presented in a way that makes it easy to pick up on trends and patterns.
Remember that Mixpanel has much more to offer than what I have covered in this tutorial. For example, Mixpanel’s push notification integration is a powerful tool to keep customers engaged. The App Store has become incredibly crowded and nowadays customers should not be taken for granted. It is important to do whatever you can to keep your customers engaged. Mixpanel can help you with this.
With Twitter integration, users can share app content on their timeline. For example, in a multimedia app a user can tweet the song he is listening to, or in a game, a newly unlocked achievement can be tweeted. Integrating Twitter into your app will help it stand out and enable users to promote it.
Step 1: Visual Studio Project Creation
To begin, we need to create a new project on Visual Studio. For this tutorial we need a simple app, so select the “Windows Phone App” option:
If you are using Visual Studio 2012 with the new WP8 SDK, you will be asked about the Target Windows Phone OS Version. If that’s the case, select the 7.1 OS.
Step 2: Building the UI
Now that the project is created, open the “MainPage.xaml” file, if it isn’t already open, and change the default application and page name TextBlocks:
Now in the ContentPanel Grid add two rows, one for a TextBox where the user will input the new status, and the other for the button to submit the status:
In order to connect to Twitter, you will first need a developer account. Go to the Twitter developer homepage and login with your Twitter account, or create one if you don’t have one already.
Step 4: Registering the New App
Once you are logged in, go to the “My Applications” page and click on the “Create a new application” button. On the following page fill in the Application Details; if you already have a web site, enter it in the Website and Callback URL fields, otherwise use a placeholder like “http://www.google.com”. After this step a new page will appear showing your application’s OAuth settings, including the “Consumer key” and the “Consumer secret”. Copy these values and add them as constant strings above the constructor in “MainPage.xaml.cs”:
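The constants might look like this (replace the placeholder values with your own keys):

```csharp
// Credentials from your application's Twitter page (placeholders).
private const string consumerKey = "YOUR_CONSUMER_KEY";
private const string consumerSecret = "YOUR_CONSUMER_SECRET";
```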
Twitter has a complete API that allows you to connect your app to the service in several ways. It is clear and easy to follow, so it is a great add-on to any app. Note that the authentication API is built on OAuth, which makes it very safe but also gives developers some trouble when connecting. The steps to connect are explained in the OAuth documentation of the API. There are different ways to connect, but in this tutorial we are going to use 3-legged authorization: the app asks for a request token, takes the user to a login page, and then collects the access token. This process can be a little complicated, especially if you only want to add one or two features of the API. Fortunately, there is a library developed by Daniel Crenna called TweetSharp, a great tool that simplifies communication between your WP7 app and Twitter. It is very simple to use and gives you access to the entire Twitter API from a single library:
TweetSharp is a Twitter API library that simplifies the task of adding Twitter to your desktop, web, and mobile applications. You can build simple widgets or complex application suites using TweetSharp.
You can find more information about the project by going to their website and looking through the hosted example projects.
Step 6: Downloading Tweetsharp
The library is only available through NuGet, so in case your Visual Studio doesn’t include the NuGet Package Manager, you need to download it from the NuGet homepage. In order to download the package, open the Package Manager Console in Visual Studio (Tools > Library Package Manager > Package Manager Console) and enter the following command: Install-Package TweetSharp.
Step 7: Adding Tweetsharp to the Project
Now that we have the library, we can add it to our project. Add a new import on the “MainPage.xaml.cs” file:
using TweetSharp;
Step 8: Adding a Browser
In order to connect an app to a user’s Twitter account, we must first be given access and permission to the Twitter account. This is done through Twitter’s webpage. Therefore, we need to add a web browser. The browser should cover most of the page, so initially it will be collapsed, and then change to visible only when the user needs to login. In the “MainPage.xaml” file add a new WebBrowser just below the ContentPanel:
Step 9: Connecting the App to Twitter
Now that we have added TweetSharp and the web browser, we can continue and connect our app to Twitter. The connection is done through a TwitterService object, so we need to create a private global variable and initialize it in the constructor:
private TwitterService client;
// Constructor
public MainPage()
{
InitializeComponent();
client = new TwitterService(consumerKey, consumerSecret);
}
Step 10: Adding the Click Event
The first time that a user clicks your “Tweet” button, you must send him to the Twitter login page so he can give your app the necessary permission. To do this, ask for a RequestToken and, once you have the token, go to the login page. First, you need to add the click event handler for your Tweet button:
private void tweetClick(object sender, RoutedEventArgs e)
{
// Ask for the token
}
Before we can add the code for the token, we need two things: a boolean variable telling us whether the user is already logged in, and a variable that will store the RequestToken. Let’s add these above the constructor:
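A sketch of those two fields (TweetSharp’s OAuthRequestToken type holds the token between the two OAuth steps):

```csharp
// True once the user has authorized the app.
private bool userAuthenticated = false;

// Holds the request token until we exchange it for an access token.
private OAuthRequestToken requestToken;
```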
With the variables ready, we can create the method for processing our RequestToken. It will check for errors and, if everything went well, save the token and take the user to the login URL from the RequestToken:
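A possible implementation, assuming the WebBrowser added earlier is named loginBrowser (that name is an assumption):

```csharp
private void processRequestToken(OAuthRequestToken token, TwitterResponse response)
{
    if (token == null)
        Dispatcher.BeginInvoke(() => { MessageBox.Show("Error obtaining request token"); });
    else
    {
        // Save the token and take the user to Twitter's authorization page.
        requestToken = token;
        Dispatcher.BeginInvoke(() =>
        {
            loginBrowser.Visibility = Visibility.Visible;
            loginBrowser.Navigate(client.GetAuthorizationUri(requestToken));
        });
    }
}
```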
Now add the code to request the Token inside the Click event method:
//If user is already logged in, just send the tweet, otherwise get the RequestToken
if (userAuthenticated)
//send the Tweet, this is just a placeholder, we will add the actual code later
Dispatcher.BeginInvoke(() => { MessageBox.Show("Placeholder for tweet sending"); });
else
client.GetRequestToken(processRequestToken);
Step 12: Adding Navigated Event
After the user logs in and authorizes our app, Twitter will take us to a URL containing a verifier code that we need in order to request the AccessToken. Let’s add this event method to our browser:
In order to retrieve the verifier code from the URL we need a parser, which in this case is a method that is on the Hammock extensions library. Copy this code and add it to your project:
// From Hammock.Extensions.StringExtensions.cs
public static IDictionary<string, string> ParseQueryString(string query)
{
    // [DC]: This method does not URL decode, and cannot handle decoded input
    if (query.StartsWith("?")) query = query.Substring(1);

    if (query.Equals(string.Empty))
    {
        return new Dictionary<string, string>();
    }

    var parts = query.Split(new[] { '&' });

    return parts.Select(
        part => part.Split(new[] { '=' })).ToDictionary(
            pair => pair[0], pair => pair[1]
        );
}
With this method we can go and get the verifier code on the browserNavigated event method:
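A sketch of that event handler (the loginBrowser name is an assumption; the placeholder comment is replaced in a later step):

```csharp
private void browserNavigated(object sender, System.Windows.Navigation.NavigationEventArgs e)
{
    // Twitter appends oauth_verifier to the callback URL after the user authorizes the app.
    if (e.Uri.AbsoluteUri.Contains("oauth_verifier"))
    {
        var values = ParseQueryString(e.Uri.Query);
        string verifier = values["oauth_verifier"];

        // Hide the browser again now that the login is done.
        loginBrowser.Visibility = Visibility.Collapsed;

        //getTheAccessToken
    }
}
```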
Just like with the RequestToken, we have to create a method that handles the result of the AccessToken request. Once we receive the result we must check for errors. If the request was done successfully, we then authenticate the user, and send the Tweet:
private void processAccessToken(OAuthAccessToken token, TwitterResponse response)
{
    if (token == null)
        Dispatcher.BeginInvoke(() => { MessageBox.Show("Error obtaining Access token"); });
    else
    {
        client.AuthenticateWith(token.Token, token.TokenSecret);
        userAuthenticated = true;
        //Send the Tweet, we will add this code later
    }
}
With this completed, go to the browserNavigated method and replace the getTheAccessToken comment with the following line:
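That line exchanges the saved request token and the verifier for an access token:

```csharp
client.GetAccessToken(requestToken, verifier, processAccessToken);
```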
When we send a Tweet we want to know if it was successfully sent. That’s why we need another method to handle a Tweet. Here’s the code that we need to add:
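A sketch of both pieces: the response handler and the SendTweet call that replaces the earlier placeholder (the TextBox name statusText is an assumption):

```csharp
private void tweetResponse(TwitterStatus tweet, TwitterResponse response)
{
    if (response.StatusCode == HttpStatusCode.OK)
        Dispatcher.BeginInvoke(() => { MessageBox.Show("Tweet posted successfully"); });
    else
        Dispatcher.BeginInvoke(() => { MessageBox.Show("Error posting the Tweet"); });
}

// Wherever the tweet is sent:
client.SendTweet(statusText.Text, tweetResponse);
```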
Right now your app should be completely functional, so go and test it. Enter any message, click the “Tweet” button, and the following screen should appear.
After that, a message saying “Tweet posted successfully” should appear:
If you go to the Twitter account, you should also be able to see the Tweet you just sent:
Congratulations! You now have an app that can connect to Twitter! But we haven’t finished yet. There are some areas we can improve.
Step 16: Saving the AccessToken
Every time a user opens your app, he will have to go through the Twitter login page. This is something users don’t like; they want to log in once and be able to Tweet without trouble. The problem is easy to solve: save the AccessToken obtained the first time the user logs in. Once it is saved in IsolatedStorage, it will always be accessible. This can be done with the following method:
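A minimal version of that save method, using the same "accessToken" key that the retrieval code expects:

```csharp
private void saveAccessToken(OAuthAccessToken token)
{
    // Store (or overwrite) the token in the application settings.
    if (!IsolatedStorageSettings.ApplicationSettings.Contains("accessToken"))
        IsolatedStorageSettings.ApplicationSettings.Add("accessToken", token);
    else
        IsolatedStorageSettings.ApplicationSettings["accessToken"] = token;

    IsolatedStorageSettings.ApplicationSettings.Save();
}
```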
With the token already on IsolatedStorage, we need a method to retrieve it. Go ahead and add the following method:
private OAuthAccessToken getAccessToken()
{
    if (IsolatedStorageSettings.ApplicationSettings.Contains("accessToken"))
        return IsolatedStorageSettings.ApplicationSettings["accessToken"] as OAuthAccessToken;
    else
        return null;
}
This function should be called from the constructor because we want to be logged in from the very beginning:
// Constructor
public MainPage()
{
    InitializeComponent();

    client = new TwitterService(consumerKey, consumerSecret);

    // Check if we already have the authentication data
    var token = getAccessToken();
    if (token != null)
    {
        client.AuthenticateWith(token.Token, token.TokenSecret);
        userAuthenticated = true;
    }
}
Step 18: Checking Expired Tokens
Additionally, take into account that the user may revoke our app’s permission, so we need to detect this and ask for permission again. This detection should be done in our tweetResponse method, since that’s where Twitter notifies you of any problem with your post. Change the code of tweetResponse to the following:
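One way to handle the unauthorized case (the exact message and cleanup are up to you):

```csharp
private void tweetResponse(TwitterStatus tweet, TwitterResponse response)
{
    if (response.StatusCode == HttpStatusCode.OK)
        Dispatcher.BeginInvoke(() => { MessageBox.Show("Tweet posted successfully"); });
    else if (response.StatusCode == HttpStatusCode.Unauthorized)
    {
        // The token was revoked or has expired: forget it and ask for permission again.
        userAuthenticated = false;
        IsolatedStorageSettings.ApplicationSettings.Remove("accessToken");
        IsolatedStorageSettings.ApplicationSettings.Save();
        Dispatcher.BeginInvoke(() => { MessageBox.Show("Please log in again"); });
    }
    else
        Dispatcher.BeginInvoke(() => { MessageBox.Show("Error posting the Tweet"); });
}
```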
One last feature to add to your app is allowing the user to close the browser if he wants to. Right now, once the browser appears, the only way to close it is by logging in or hitting an error. You can give the user this option via the back button:
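This can be done by overriding OnBackKeyPress on the page (loginBrowser is again the assumed browser name):

```csharp
protected override void OnBackKeyPress(System.ComponentModel.CancelEventArgs e)
{
    // If the login browser is visible, hide it instead of leaving the page.
    if (loginBrowser.Visibility == Visibility.Visible)
    {
        loginBrowser.Visibility = Visibility.Collapsed;
        e.Cancel = true;
    }

    base.OnBackKeyPress(e);
}
```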
This tutorial is a short explanation of what you can do with Tweetsharp and Twitter. If you are interested in increasing the functionality of your app, like getting mentions, retweets, direct messages, and several other features, go to Tweetsharp’s website and you will find everything that you need to start developing a great app. I hope you enjoyed this tutorial and that it will be useful for your future projects.
Offloading long-running work from the main thread prevents apps from freezing. In most programming languages, achieving this is a bit tricky, but the NSOperationQueue class in iOS makes it easy!
This tutorial will demonstrate how to use the NSOperationQueue class. An NSOperationQueue object is a queue that handles objects of the NSOperation class type. An NSOperation object, simply phrased, represents a single task, including both the data and the code related to the task. The NSOperationQueue handles and manages the execution of all the NSOperation objects (the tasks) that have been added to it. The execution takes place concurrently with the main thread of the application. When an NSOperation object is added to the queue, it begins executing as soon as possible and it does not leave the queue until it is finished. A task can be cancelled, but it is not removed from the queue until it has finished. The NSOperation class is an abstract one, so it cannot be used directly in a program. Instead, there are two provided subclasses, the NSInvocationOperation class and the NSBlockOperation class. I’ll use the first one in this tutorial.
The Sample Project
Here’s the goal for this tutorial: for each extra thread we want, our application will create an NSInvocationOperation (NSOperation) object. We’ll add each object to the NSOperationQueue, and then we’re finished; the queue takes charge of everything and the app works without freezing. To clearly demonstrate the use of the classes mentioned above, we will create a simple sample project in which, apart from the app’s main thread, two more threads run alongside it. On the first thread, a loop will run from 1 to 10,000,000 and every 100 steps a label will be updated with the loop counter’s value. On the second thread, a label’s background will be filled with a custom color inside a loop that executes repeatedly, so we will have something like a color rotator. At the same time, the RGB values of the custom background color along with the loop counter’s value will be displayed next to the label. Finally, we will use three buttons to change the view’s background color on the main thread. These tasks could not be executed simultaneously without multi-tasking. Here is a look at the end result:
Step 1: Create the Project
Let’s begin by creating the project. Open Xcode and create a new Single View Application.
Click on Next and set a name for the project. I named it ThreadingTestApp. You can use the same or any other name you like.
Next, complete the project creation.
Step 2: Setup the Interface
Click on the ViewController.xib file to reveal the Interface Builder. Add the following controls to create an interface like the next image:
UINavigationBar
Frame (x, y, W, H): 0, 0, 320, 44
Tintcolor: Black color
Title: “Simple Multi-Threading Demo”
UILabel
Frame (x, y, W, H): 20, 59, 280, 21
Text: “Counter at Thread #1”
UILabel
Frame (x, y, W, H): 20, 88, 280, 50
Background color: Light gray color
Text color: Dark gray color
Text: -
UILabel
Frame (x, y, W, H): 20, 154, 280, 21
Text: “Random Color Rotator at Thread #2”
UILabel
Frame (x, y, W, H): 20, 183, 100, 80
Background color: Light gray color
Text: -
UILabel
Frame (x, y, W, H): 128, 183, 150, 80
Text: -
UILabel
Frame (x, y, W, H): 20, 374, 280, 21
Text: “Background Color at Main Thread”
UIButton
Frame (x, y, W, H): 20, 403, 73, 37
Title: “Color #1”
UIButton
Frame (x, y, W, H): 124, 403, 73, 37
Title: “Color #2”
UIButton
Frame (x, y, W, H): 228, 403, 73, 37
Title: “Color #3”
For the last UILabel and the three UIButtons, set the Autosizing value to Left – Bottom to make the interface look nice on the iPhone 4/4S and iPhone 5, just like the next image:
Step 3: IBOutlet Properties and IBAction Methods
In this next step we will create the IBOutlet properties and IBAction methods that are necessary to make our sample app work. To create new properties and methods and connect them to your controls while in Interface Builder, click the middle segment of the Editor control in the Xcode toolbar to reveal the Assistant Editor:
Not every control needs an outlet property. We will add only one for the UILabels 3, 5, and 6 (according to the order they were listed in step 2), named label1, label2, and label3.
To insert a new outlet property, Control+Click (Right click) on a label > Click on the New Referencing Outlet > Drag and Drop into the Assistant Editor. After that, specify a name for the new property, just like in the following images:
Inserting a new IBOutlet property
Setting the IBOutlet property name
Repeat the process above three times to connect the three UILabels to properties. Inside your ViewController.h file you have these properties declared:
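With manual reference counting (this project releases its operations by hand later on), the declarations would look something like:

```objc
@property (nonatomic, retain) IBOutlet UILabel *label1;
@property (nonatomic, retain) IBOutlet UILabel *label2;
@property (nonatomic, retain) IBOutlet UILabel *label3;
```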
Now add the IBAction methods for the three UIButtons. Each one button will change the background color of the view. To insert a new IBAction method, Control+Click (Right click) on a UIButton > Click on the Touch Up Inside > Drag and Drop into the Assistant Editor. After that specify a name for the new method. Take a look at the following images and the next snippet for the method names:
Inserting a new IBAction method
Setting the IBAction method name
Again, repeat the process above three times to connect every UIButton to an action method. The ViewController.h file should now contain these:
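Assuming the actions are named after the buttons (these names are illustrative, not from the original project), ViewController.h would declare:

```objc
- (IBAction)changeColor1:(id)sender;
- (IBAction)changeColor2:(id)sender;
- (IBAction)changeColor3:(id)sender;
```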
The IBOutlet properties and IBAction methods are now ready. We can now begin coding.
Step 4: The NSOperationQueue Object and the Necessary Task-Related Method Declarations
One of the most important things we must do is declare an NSOperationQueue object (our operation queue), which will be used to execute our tasks on secondary threads. Open the ViewController.h file and add the following content right after the @interface header (don’t forget the curly brackets):
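The declaration might look like this:

```objc
@interface ViewController : UIViewController {
    // The queue that will run our tasks on secondary threads.
    NSOperationQueue *operationQueue;
}
```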
Also, each task needs at least one method containing the code that will run concurrently with the main thread. In keeping with the introductory description, the method for the first task will be named counterTask and the method for the second will be named colorRotatorTask:
-(void)counterTask;
-(void)colorRotatorTask;
That’s all we need. Our ViewController.h file should look like this:
We’re almost finished. We have setup our interface, made all the necessary connections, declared any needed IBAction and other methods, and established our base. Now it is time to build upon them.
Open the ViewController.m file and go to the viewDidLoad method. The most important part of this tutorial is going to take place here. We will create a new NSOperationQueue instance and two NSOperation (NSInvocationOperation) objects. These objects will encapsulate the code of the two methods we previously declared and then they will be executed on their own by the NSOperationQueue. Here is the code:
- (void)viewDidLoad
{
[super viewDidLoad];
// Create a new NSOperationQueue instance.
operationQueue = [NSOperationQueue new];
// Create a new NSOperation object using the NSInvocationOperation subclass.
// Tell it to run the counterTask method.
NSInvocationOperation *operation = [[NSInvocationOperation alloc] initWithTarget:self
selector:@selector(counterTask)
object:nil];
// Add the operation to the queue and let it be executed.
[operationQueue addOperation:operation];
[operation release];
// The same story as above, just tell here to execute the colorRotatorTask method.
operation = [[NSInvocationOperation alloc] initWithTarget:self
selector:@selector(colorRotatorTask)
object:nil];
[operationQueue addOperation:operation];
[operation release];
}
This whole process is really simple. After creating the NSOperationQueue instance, we create an NSInvocationOperation object (operation). We set its selector method (the code we want executed on a separate thread), and then we add it to the queue. Once it enters the queue it immediately begins to run. After that the operation object can be released, since the queue is responsible for handling it from now on. In this case we create another object and we’ll use it the same way for the second task (colorRotatorTask).
Our next task is to implement the two selector methods. Let’s begin by writing the counterTask method. It will contain a for loop that will run for a large number of iterations and every 100 steps the label1‘s text will be updated with the current iteration’s counter value (i). The code is simple, so here is everything:
-(void)counterTask{
// Make a BIG loop and every 100 steps let it update the label1 UILabel with the counter's value.
for (int i=0; i<10000000; i++) {
if (i % 100 == 0) {
// Notice that we use the performSelectorOnMainThread method here instead of setting the label's value directly.
// We do that to let the main thread take care of showing the text on the label
// and to avoid display problems due to the loop speed.
[label1 performSelectorOnMainThread:@selector(setText:)
withObject:[NSString stringWithFormat:@"%d", i]
waitUntilDone:YES];
}
}
// When the loop gets finished then just display a message.
[label1 performSelectorOnMainThread:@selector(setText:) withObject:@"Thread #1 has finished." waitUntilDone:NO];
}
Please note that it is recommended as the best practice (even by Apple) to perform any visual updates on the interface using the main thread and not by doing it directly from a secondary thread. Therefore, the use of the performSelectorOnMainThread method is necessary in cases such as this one.
Now let’s implement the colorRotatorTask method:
-(void)colorRotatorTask{
// We need a custom color to work with.
UIColor *customColor;
// Run a loop with 500 iterations.
for (int i=0; i<500; i++) {
// Create three float random numbers with values from 0.0 to 1.0.
float redColorValue = (arc4random() % 100) * 1.0 / 100;
float greenColorValue = (arc4random() % 100) * 1.0 / 100;
float blueColorValue = (arc4random() % 100) * 1.0 / 100;
// Create our custom color. Keep the alpha value to 1.0.
customColor = [UIColor colorWithRed:redColorValue green:greenColorValue blue:blueColorValue alpha:1.0];
// Change the label2 UILabel's background color.
[label2 performSelectorOnMainThread:@selector(setBackgroundColor:) withObject:customColor waitUntilDone:YES];
// Set the r, g, b and iteration number values on label3.
[label3 performSelectorOnMainThread:@selector(setText:)
withObject:[NSString stringWithFormat:@"Red: %.2f\nGreen: %.2f\nBlue: %.2f\nIteration #: %d", redColorValue, greenColorValue, blueColorValue, i]
waitUntilDone:YES];
// Put the thread to sleep for a while to let us see the color rotation easily.
[NSThread sleepForTimeInterval:0.4];
}
// Show a message when the loop is over.
[label3 performSelectorOnMainThread:@selector(setText:) withObject:@"Thread #2 has finished." waitUntilDone:NO];
}
You can see that we used the performSelectorOnMainThread method here as well. Also note the [NSThread sleepForTimeInterval:0.4]; call, which causes a brief delay (0.4 seconds) in each loop iteration. Even though this method isn’t strictly necessary, it is used here to slow down the rate at which the label2 UILabel’s background color changes (our color rotator). Additionally, in each iteration we create random values for red, green, and blue, use them to produce a custom color, and set that color as the background color of the label2 UILabel.
At this point the two tasks that are going to be executed at the same time with the main thread are ready. Let’s implement the three (really easy) IBAction methods and then we are ready to go. As I have already mentioned, the three UIButtons will change the view’s background color, with the ultimate goal to demonstrate how the main thread can run alongside the other two tasks. Here they are:
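A minimal sketch of those three actions (the method names and colors here are illustrative):

```objc
- (IBAction)changeColor1:(id)sender {
    // IBAction methods run on the main thread, so we can touch the view directly.
    self.view.backgroundColor = [UIColor redColor];
}

- (IBAction)changeColor2:(id)sender {
    self.view.backgroundColor = [UIColor greenColor];
}

- (IBAction)changeColor3:(id)sender {
    self.view.backgroundColor = [UIColor blueColor];
}
```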
That’s it! Now you can run the application and see how three different tasks can take place at the same time. Remember that when an NSOperation object finishes executing, it automatically leaves the queue.
Conclusion
Many of you may have already discovered that the actual code to run a multi-tasking app only requires a few lines of code. It seems that the greatest workload is implementing the required methods that work with each task. Nevertheless, this method is an easy way to develop multi-threading apps in iOS.
In this tutorial series, you’ll learn how to create an unblock puzzle game. The objective of the game is to clear the path for the square to get out. Read on!
Step 1: Application Overview
Using pre-created graphics, we will code an entertaining game with Lua and the Corona SDK APIs.
The player will drag the blocks around the stage to clear a path for the square to get out. You can modify the parameters in the code to customize the game.
Step 2: Target Device
The first thing we have to do is select the platform we want to run our app on; that way, we’ll be able to choose the right size for the images we will use.
The iOS platform has these characteristics:
iPad 1/2/Mini: 1024x768px, 132 ppi
iPad Retina: 2048x1536px, 264 ppi
iPhone/iPod Touch: 320x480px, 163 ppi
iPhone/iPod Retina: 960x640px, 326 ppi
iPhone 5/iPod Touch: 1136x640px, 326 ppi
Because Android is an open platform, there are many different devices and resolutions. A few of the more common screen characteristics are:
Asus Nexus 7 Tablet: 800x1280px, 216 ppi
Motorola Droid X: 854x480px, 228 ppi
Samsung Galaxy SIII: 720x1280px, 306 ppi
In this tutorial we’ll be focusing on the iOS platform for the graphic design, specifically developing for distribution on an iPhone/iPod Touch, but the code presented here should apply to Android development with the Corona SDK as well.
Step 3: Interface
A simple and friendly interface will be used. It will involve multiple shapes, buttons, bitmaps and more.
The interface graphic resources necessary for this tutorial can be found in the attached download.
Step 4: Export Graphics
Depending on the device you have selected, you may need to export the graphics in the recommended ppi. You can do that in your favorite image editor.
I used the Adjust Size… function in the Preview app on Mac OS X.
Remember to give the images a descriptive name and save them in your project folder.
Step 5: App Configuration
An external file, config.lua, will be used to make the application go fullscreen across devices. This file specifies the original screen size and the method used to scale that content in case the app is run at a different screen resolution.
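A typical config.lua for a 320x480 content area scaled with letterbox looks like this (the exact values used by the original project are an assumption):

```lua
application =
{
    content =
    {
        width = 320,
        height = 480,
        scale = 'letterbox'
    }
}
```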
Step 6: Main.lua
Open your preferred Lua editor (any text editor will work, but you won’t have syntax highlighting) and prepare to write your awesome app. Remember to save the file as main.lua in your project folder.
Step 7: Code Structure
We’ll structure our code as if it were a Class. If you know ActionScript or Java, you should find the structure familiar.
Necessary Classes
Variables and Constants
Declare Functions
Constructor (Main function)
Class methods (other functions)
Call Main function
Step 8: Hide the Status Bar
display.setStatusBar(display.HiddenStatusBar)
This code hides the status bar. The status bar is the bar on top of the device screen that shows the time, signal, and other indicators.
Step 9: Background
A simple graphic is used as the background for the application interface. The next line of code stores it.
-- Graphics
-- [Background]
local bg = display.newImage('bg.png')
Step 10: Title View
This is the Title View; it will be the first interactive screen to appear in our game. These variables store its components:
-- [Title View]
local titleBg
local playBtn
local creditsBtn
local titleView
Step 11: Credits View
This view will show the credits and copyright of the game. These variables will be used to store them:
-- [CreditsView]
local creditsView
Step 12: Game Background
This image will be placed on top of our previous background. This will be the game background.
-- Game Background
local gameBg
Step 13: Blocks
The next variables will store the different blocks in the stage.
-- Blocks
local hblocks
local vblocks
local s
Step 14: Movements TextField
The movement textfield value is handled by this variable.
-- Movements TextField
local movements
Step 15: Alert
This is the alert that will be displayed when you win the game. It will complete the level and end the game.
-- Alert
local alertView
Step 16: Variables
These are the variables we’ll use. Read the comments in the code to learn more about them.
-- Variables
local lastY --Used to reposition the credits view
local dir --stores the dragging direction
-- Level Table:
-- 1 = vertical block
-- 2 = horizontal block
-- 3 = square
local l1 = {{0, 0, 0, 0, 2, 0},
{0, 0, 0, 0, 0, 0},
{0, 0, 0, 3, 0, 1},
{0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0},}
Step 17: Declare Functions
Declare all functions as local at the start.
-- Functions
local Main = {}
local startButtonListeners = {}
local showCredits = {}
local hideCredits = {}
local showGameView = {}
local gameListeners = {}
local dragV = {}
local dragH = {}
local hitTestObjects = {}
local update = {}
local alert = {}
Step 18: Constructor
Next we’ll create the function that will initialize all the game logic:
function Main()
-- code...
end
Step 19: Add Title View
Now we place the TitleView in the stage and call a function that will add the tap listeners to the buttons.
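A sketch of that step inside the Main function (the image file names are assumptions):

```lua
function Main()
    -- Build the title screen from the pre-created graphics.
    titleBg = display.newImage('titleBg.png')
    playBtn = display.newImage('playBtn.png')
    creditsBtn = display.newImage('creditsBtn.png')

    -- Group the parts so the whole view can be moved or removed at once.
    titleView = display.newGroup()
    titleView:insert(titleBg)
    titleView:insert(playBtn)
    titleView:insert(creditsBtn)

    -- Wire up the tap listeners for the buttons.
    startButtonListeners('add')
end
```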
In this part of the series you’ve learned the interface and the basic setup of the game. In the next and final part of the series, we’ll handle the level creation, collision detection, and the final steps to take prior to release like app testing, creating a start screen, adding an icon and, finally, building the app. Stay tuned for the final part!
With the release of iOS 4, the Core Location framework received a significant update in the form of support for geofencing. Not only can an application be notified when the device enters or exits a geofence, the operating system also delivers these notifications while the application is in the background. This addition to the Core Location framework fits neatly into Apple’s endeavor to support multitasking on iOS. It opens up a number of possibilities, such as performing tasks in the background through geofencing, and that is exactly what this tutorial is about.
What is geofencing?
Geofence
A geofence is nothing more than a virtual boundary that corresponds to an actual location in space. In iOS, a geofence is represented by an instance of the CLRegion class, which is characterized by a coordinate (latitude and longitude), a radius, and a unique identifier. This means that geofences on iOS are by definition circular.
Geofencing
Geofencing is the process in which one or more geofences are monitored and action is taken when a particular geofence is entered or exited. Geofencing is a technology very well suited for mobile devices. A good example is Apple’s Reminders application. You can attach a geofence to a reminder so that the application notifies you when you are in the vicinity of a particular location (figure 1). This is incredibly powerful and much more intuitive than using traditional reminders that fire at a certain date or time.
On the iOS platform, geofencing can be implemented in several ways. The simplest implementation is to use the Core Location framework to monitor one or more geofences. Whenever the user’s device enters or exits one of these geofences, the application is notified by the operating system. This also works if the application is not in the foreground, which is crucial if you want to properly implement geofencing in an application.
A more complex implementation of geofencing involves a remote server that tracks the location of the user’s device and sends push notifications to the user’s device if it enters or exits a geofence. This approach is much more involved and comes with significant overhead. One reason for opting for this strategy is tied to the limitations of the Core Location framework. Let me explain what I mean by that.
Geofencing on iOS
As I told you earlier, the Core Location framework is responsible for geofencing on iOS. Location services are closely tied to multitasking and geofencing is no exception. By enabling geofencing in the background, it appears as if your application continues to run in the background. However, the operating system goes one step further by monitoring regions of interest even if the application is inactive, that is, when no instance of the application is running in the background or foreground. The mechanism is not that complicated. The operating system manages a list of geofences and keeps track of which applications are interested in which geofences. If a particular geofence is entered or exited, the operating system notifies the corresponding application. The primary reason for this approach is to save battery power. Instead of allowing several applications to run in the background and use the device’s location services, the operating system manages this task and only notifies an application when necessary.
There are a number of downsides to how geofencing works on iOS. For example, an application can monitor no more than twenty regions or geofences. This might be sufficient for most applications, but you can imagine that for some applications this upper limit is unacceptable.
In the remainder of this tutorial, I will show you how to implement geofencing in an iOS application. At the end of the tutorial, I will make some suggestions as to how you can leverage geofencing to perform tasks in the background even if your application is not running.
Step 1: Project Setup
Create a new project in Xcode by selecting the Single View Application template from the list of templates (figure 2). Name your application Geofencing, enter a company identifier, set iPhone for the device family, and check Use Storyboards and Use Automatic Reference Counting. We won’t be using unit tests in this project (figure 3). Tell Xcode where you want to save the project and click Create.
Because the Core Location framework will do the heavy lifting for us, we need to link our project against it. Select the project in the Project Navigator and choose the Geofencing target from the list of targets. Choose the Build Phases tab at the top and open the Link Binary With Libraries drawer. Click the button with the plus sign and choose CoreLocation.framework from the list of libraries and frameworks (figure 4).
Step 2: User Interface
Before we create the user interface, we need to make one change to the MTViewController class. Open the class header file and change the superclass to UITableViewController.
By using a storyboard, creating the user interface is quick and easy. Open MainStoryboard.storyboard, select the view controller, and delete it from the storyboard. Drag a table view controller from the Object Library on the right and set its class to MTViewController in the Identity Inspector. With the view controller still selected, open the Editor menu and choose Embed In > Navigation Controller (figure 5).
Select the prototype cell of the table view controller and give it an identifier of GeofenceCell and set the cell’s style to Subtitle (figure 6).
The application will need the ability to add and remove geofences. Let’s start by creating two buttons. In the view controller’s viewDidLoad method, we call setupView, a helper method in which we further configure the user interface. In setupView, we create a button to add a geofence based on the current location and a button to edit the table view.
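A sketch of what setupView might look like is shown below. The selector names addCurrentLocation: and editTableView: follow the action names used later in this tutorial; the button titles are assumptions.

```objc
- (void)setupView {
    // Button for adding a geofence based on the device's current location
    self.navigationItem.leftBarButtonItem = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemAdd
                                                                                          target:self
                                                                                          action:@selector(addCurrentLocation:)];

    // Button for toggling the table view's editing mode
    self.navigationItem.rightBarButtonItem = [[UIBarButtonItem alloc] initWithTitle:@"Edit"
                                                                              style:UIBarButtonItemStyleBordered
                                                                             target:self
                                                                             action:@selector(editTableView:)];
}
```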
The CLLocationManager class is the star player of this tutorial. The location manager will provide us with the coordinates of the device’s current location and it will also enable us to work with geofences. Start by adding an import statement to the view controller’s header file to import the framework’s header files.
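After the import statement and the superclass change described earlier, the header file might look like this:

```objc
#import <UIKit/UIKit.h>
#import <CoreLocation/CoreLocation.h>

@interface MTViewController : UITableViewController

@end
```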
We also need to create two private properties. The first property stores a reference to the location manager, an instance of the CLLocationManager class. To receive updates from the location manager, the MTViewController class is required to conform to the CLLocationManagerDelegate protocol. I will talk more about this in a few moments. The second property is of type NSArray and serves as the data source of the table view. It will store the geofences, instances of CLRegion, that the location manager monitors. For practical reasons, I have also declared a helper variable (BOOL) named _didStartMonitoringRegion. Its purpose will become clear a bit later in this tutorial.
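One way to declare these properties, the protocol conformance, and the helper instance variable is in a class extension at the top of the implementation file:

```objc
@interface MTViewController () <CLLocationManagerDelegate> {
    BOOL _didStartMonitoringRegion;
}

// The location manager that monitors the geofences
@property (strong, nonatomic) CLLocationManager *locationManager;
// The data source of the table view, containing CLRegion instances
@property (strong, nonatomic) NSArray *geofences;

@end
```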
In the view controller’s awakeFromNib method, we initialize and configure the location manager as shown below. The awakeFromNib method is sent to the view controller after each object in the nib file is loaded and ready to use.
To populate the table view, we need the geofences that the operating system is monitoring for us. The location manager keeps a reference to these geofences, which means that there is no need to store the geofences ourselves. Amend the awakeFromNib method as shown below. Because the monitoredRegions property is a set and the geofences property is an array, we convert the set with allObjects before storing it in the view controller’s geofences property.
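A sketch of the amended awakeFromNib method, covering both the location manager setup and the loading of the monitored regions:

```objc
- (void)awakeFromNib {
    [super awakeFromNib];

    // Initialize and configure the location manager
    self.locationManager = [[CLLocationManager alloc] init];
    [self.locationManager setDelegate:self];

    // Load the geofences the operating system is already monitoring for us
    self.geofences = [[self.locationManager monitoredRegions] allObjects];
}
```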
The implementation of the table view data source protocol holds no surprises (see below). As I mentioned earlier, the geofences property contains instances of CLRegion. In each table view cell, we display the location of each region along with the region’s unique identifier. We take a closer look at the tableView:commitEditingStyle:forRowAtIndexPath: method later in this tutorial.
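The data source implementation might look along these lines. The coordinate format shown in the cell's text label is an assumption; the GeofenceCell identifier matches the one set in the storyboard.

```objc
- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    return [self.geofences count];
}

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"GeofenceCell" forIndexPath:indexPath];

    // Display the region's coordinate and its unique identifier
    CLRegion *region = [self.geofences objectAtIndex:[indexPath row]];
    cell.textLabel.text = [NSString stringWithFormat:@"%.4f, %.4f", region.center.latitude, region.center.longitude];
    cell.detailTextLabel.text = [region identifier];

    return cell;
}
```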
To add a geofence, we need to implement the addCurrentLocation: action. Its implementation is pretty simple as you can see. In the addCurrentLocation: method, we ask the location manager instance to start generating updates for the device’s current location. We also set the helper instance variable _didStartMonitoringRegion to NO. I will explain the reason for this in a moment.
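A minimal sketch of the addCurrentLocation: action:

```objc
- (void)addCurrentLocation:(id)sender {
    // Reset the helper flag and start generating location updates
    _didStartMonitoringRegion = NO;
    [self.locationManager startUpdatingLocation];
}
```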
Whenever the location manager has location updates available, it invokes the locationManager:didUpdateLocations: delegate method as shown below. In this delegate method, we first check if _didStartMonitoringRegion is set to NO. This is important, because it can happen that location updates are sent so quickly one after the other that multiple geofences are created for one location. We create the region or geofence (CLRegion) based on the first location in the array of locations given to us by the location manager. The radius of the region is set to 250.0 (in meters), but feel free to modify this to your needs.
The identifier of the CLRegion instance ensures that we can distinguish one monitored region from another. If we were to monitor a new region with the same identifier, the old region would be replaced with the new region. To start monitoring the new region, we send the location manager a message of startMonitoringForRegion: and pass the new region as an argument. With the new region being monitored, we also stop updating the current location.
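A sketch of the delegate method is shown below. It uses NSProcessInfo to generate a unique identifier and simply reloads the table view after updating the data source; the tutorial's actual implementation inserts a row instead, but the overall flow is the same.

```objc
- (void)locationManager:(CLLocationManager *)manager didUpdateLocations:(NSArray *)locations {
    if (!_didStartMonitoringRegion) {
        _didStartMonitoringRegion = YES;

        // Create a geofence based on the first location update
        CLLocation *location = [locations objectAtIndex:0];
        NSString *identifier = [[NSProcessInfo processInfo] globallyUniqueString];
        CLRegion *region = [[CLRegion alloc] initCircularRegionWithCenter:[location coordinate]
                                                                   radius:250.0
                                                               identifier:identifier];

        // Start monitoring the new region and stop location updates
        [manager startMonitoringForRegion:region];
        [manager stopUpdatingLocation];

        // Update the data source and the table view
        self.geofences = [[manager monitoredRegions] allObjects];
        [self.tableView reloadData];
    }
}
```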
To update the table view, we first update its data source and then insert a row at the bottom of the table view. The view is also updated by calling another helper method, updateView. This method performs several checks and updates the view controller’s view accordingly.
Whenever the user’s device enters and exits a monitored region, the location manager’s delegate is sent a message of locationManager:didEnterRegion: and locationManager:didExitRegion:, respectively. Implement both delegate methods as shown below.
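For testing purposes, logging the region's identifier is enough to see the delegate methods in action:

```objc
- (void)locationManager:(CLLocationManager *)manager didEnterRegion:(CLRegion *)region {
    NSLog(@"Entered region with identifier %@", [region identifier]);
}

- (void)locationManager:(CLLocationManager *)manager didExitRegion:(CLRegion *)region {
    NSLog(@"Exited region with identifier %@", [region identifier]);
}
```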
Before running the application in the iOS Simulator, we need to modify the active scheme. Click Geofencing on the immediate right of the Stop button (figure 7) and select Edit Scheme… from the menu. On the left, select the tab labeled Run Geofencing.app and choose the Options tab at the top (figure 8). Check the checkbox labeled Allow Location Simulation and set the default location to New York, NY, USA (figure 8). This will make it easier to test our application.
Build and run the application in the iOS Simulator and add a region to monitor by clicking the add button in the top left. You can test the geofence by changing the simulated location of the iOS Simulator (figure 9).
Step 6: Editing Geofences
Editing the geofences is quite easy. Start by implementing the editTableView: action as shown below. All we do is toggle the editing mode of the table view and update the title of the edit button.
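A possible implementation of the editTableView: action:

```objc
- (void)editTableView:(id)sender {
    // Toggle the table view's editing mode and update the button's title
    [self.tableView setEditing:![self.tableView isEditing] animated:YES];
    [self.navigationItem.rightBarButtonItem setTitle:([self.tableView isEditing] ? @"Done" : @"Edit")];
}
```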
To remove a geofence from the table view (and stop monitoring the region), we implement the tableView:commitEditingStyle:forRowAtIndexPath: as shown below. Its implementation is not complicated as you can see. We fetch the corresponding region from the data source, tell the location manager to stop monitoring that region, update the table view and its data source, and update the view. Build and run the application once more to try it out.
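The delete flow described above might be sketched like this:

```objc
- (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath {
    if (editingStyle == UITableViewCellEditingStyleDelete) {
        // Fetch the region and tell the location manager to stop monitoring it
        CLRegion *region = [self.geofences objectAtIndex:[indexPath row]];
        [self.locationManager stopMonitoringForRegion:region];

        // Update the data source and the table view
        NSMutableArray *geofences = [self.geofences mutableCopy];
        [geofences removeObjectAtIndex:[indexPath row]];
        self.geofences = geofences;
        [tableView deleteRowsAtIndexPaths:@[indexPath] withRowAnimation:UITableViewRowAnimationFade];
    }
}
```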
Developers have tried to bypass the limitations of the iOS operating system for many years. One of the hacks that some applications use to update an application in the background is geofencing. This is an obvious strategy for applications, such as Foursquare or TomTom. However, Apple seems to allow other applications to use geofencing as well. Instapaper, for example, implemented this strategy some time ago. By defining one or more geofences, Instapaper starts updating your list of saved articles in the background whenever you enter one of those geofences. This is a clever strategy that can be useful in a number of use cases.
Status Bar
Whenever an application is actively using the device’s location services, the location services icon is shown in the status bar. The same is true for geofencing. The difference is the icon as shown in figure 10. Even if no application is running in the foreground, the outlined location services icon is visible as long as any application on the device is making use of geofencing.
Things To Keep in Mind
I have only covered the absolute fundamentals of geofencing in this tutorial. It is important to keep a few things in mind before implementing geofencing in any application. The most obvious aspect is privacy. A user can disable location services for any application, which means that geofencing is not guaranteed to work in every use case. It is important to perform three checks when working with location services: (1) are location services enabled, (2) has the user given your application permission to use the device’s location services, and (3) does the device support geofencing?
The first question is answered by sending the CLLocationManager class a message of locationServicesEnabled. This tells the application whether the device has location services enabled. To verify whether your application is allowed to make use of the device’s location services, you send the CLLocationManager class a message of authorizationStatus. Finally, to check whether geofencing is available on the device, invoke CLLocationManager's class method regionMonitoringAvailable. Not all devices and models support geofencing.
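The three checks can be sketched as follows:

```objc
if (![CLLocationManager locationServicesEnabled]) {
    // (1) Location services are disabled on this device
}

if ([CLLocationManager authorizationStatus] == kCLAuthorizationStatusDenied) {
    // (2) The user denied this application access to location services
}

if (![CLLocationManager regionMonitoringAvailable]) {
    // (3) This device does not support geofencing
}
```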
Conclusion
Implementing geofencing in an iOS application isn’t that difficult, as you can see. A basic understanding of the Core Location framework is all you need to get started. The Core Location framework is a powerful framework and it has a lot more neat features that are often overlooked or underused.
BlackBerry 10 is a completely new and unique operating system based on the QNX kernel. This article will provide a quick look at this new platform from a mobile developer’s perspective.
BlackBerry 10 Features
BlackBerry Flow:
BlackBerry has termed their User Interface ‘Flow’. The interface speaks for itself with its efficiency and quick access to applications. With BlackBerry Flow, users can easily switch between applications without closing any of them. However, the number of applications is limited – only eight applications can be open simultaneously. Still, it is decent enough for multi-tasking.
BlackBerry Hub:
This feature helps you organize all your messages, calls, social profiles, email accounts, etc. This feature can manage multiple accounts without opening an individual application for each one. It runs in the background continuously and will not miss any information.
BlackBerry Peek:
By using this feature, users can have a quick glimpse at the notifications they receive without closing or exiting the application they are currently using. It won’t take more than a second for the user to swipe across the screen and all their notifications will be listed. For example, as demonstrated at the launch event, the user could view their emails without disturbing the YouTube video that was playing.
BlackBerry Keyboard:
BlackBerry has always been the ‘King of Keyboards’. With this new operating system, BlackBerry has lived up to its name and expectations. The Z10 has a great built-in touch keyboard that is expected to hold its own against the Android and iOS keyboards. Other features include accurate auto correct and outstanding word-flicking gestures. In addition to all these features, the BlackBerry 10 keyboard lets the user type in English, Spanish, and French concurrently.
BlackBerry Balance:
Do you experience difficulty separating your business profile from your personal one? BlackBerry has a solution for that called ‘BlackBerry Balance’. This is a useful feature for companies: if an employee uses a personal BlackBerry handset for work and, due to some circumstance, leaves the company, the administrator can wipe all of the information stored in the employee’s business profile. Sadly, this feature is limited to BlackBerry Enterprise Service users only.
BlackBerry Messenger:
BlackBerry Messenger has been revamped for the new operating system. Besides text chats between friends, users can video chat and share screens. BlackBerry claims that this feature is useful for those who want to share documents, PowerPoint slides, and spreadsheets.
Why Develop BlackBerry Apps?
The release of BlackBerry OS 10 sparked a wave of discussion about its future and BlackBerry OS applications. BlackBerry also announced the BlackBerry World Catalogue, which had seventy thousand apps on the day of the OS release. That is clearly impressive. So, can we expect BlackBerry to steal the stage from giants like Android and iOS? Let’s consider this possibility. In this article, I’ll be talking more on the developer side, about how developing BlackBerry apps may or may not be profitable for mobile app developers.
When it comes to apps, Google Play has stolen the show. Conversion, or rather ‘re-packaging’, of Android apps for use as BlackBerry apps is incredibly simple with the Android Player. Companies usually have to focus on the development of their apps on several different platforms. They don’t mind spending to develop Android or iOS apps because those platforms constitute 95% of their user base. What really matters is their investment in the development of apps that cater to the needs of the remaining 5% of their customers. Microsoft’s Windows Phone platform already competes for this share and we can expect this margin to increase in the future.
But that doesn’t solve the overriding issue, because developing apps for different platforms requires different languages – Java for Android, Objective-C for iOS, C# for Windows Phone, and C/C++ for BlackBerry 10 apps. Developers can’t simply cross-compile their source code to develop apps on a different platform; they need to re-engineer the whole code base. This is the basic cross-platform compatibility issue for mobile applications.
This is where the Android Player helps. If someone already has an Android app, it can easily be re-packaged to work on BlackBerry OS 10, without having to re-program it. This means that companies can save the money and time that otherwise would have been spent on developing an all new application. Thus, investing in an Android app essentially means getting a BlackBerry app for free!
Developing BlackBerry 10 Apps
The following framework and languages can be used to develop BlackBerry apps:
C/C++ Native SDK: This is the most basic native app framework. Installation proceeds as follows:
Get code signing keys and debug tokens for distributing apps. These include the RIM Developer Key (RDK) and the debug token (PBDT) file to generate debug tokens.
Install the Native SDK (support for Windows, Mac OS, and Linux). The BlackBerry 10 Native SDK includes all of the tools – the QNX Momentics IDE, a compiler, a linker, libraries, and command-line tools – that are necessary to start developing applications for BlackBerry 10 devices.
Install the Blackberry Device simulator because it helps test apps even without a BlackBerry 10 device.
Set up the SDK and connect it with a device/simulator and upload the debug tokens to start developing. Using the QNX Momentics IDE, which is the primary development tool for all native applications, is recommended.
Start the development by running the BlackBerry Deployment Setup Wizard on your system.
Java Android Runtime: This allows your pre-developed Android apps to run on BlackBerry OS 10. You can use the BlackBerry Runtime for Android apps to run Android 2.3.3+ platform applications on the BlackBerry Tablet OS and BlackBerry 10. To use the runtime, first repackage your Android applications in the BAR file format, which is the file format required for an application to run on the BlackBerry Tablet OS and BlackBerry 10.
One of the following three tools can be used to repackage applications in the BAR file format and to check how compatible an application is with the BlackBerry Tablet OS or BlackBerry 10. Some of the APIs in the Android SDK may not be supported, or may only be partially supported, by the BlackBerry Runtime for Android apps.
The following flow chart shows a few steps necessary to repackage Android applications to the BAR file format that the BlackBerry PlayBook OS and the BlackBerry 10 use. Then, the app is ready to be published on the BlackBerry World Catalogue.
Many companies who have their app on the Blackberry 10 platform have given positive reviews. This platform has enabled them to develop applications that are enterprise-grade and user-friendly, as well as capable of meeting the needs of their global customers.
Our deep integration with BlackBerry 10 connects users directly with all of their business content on Box, delivering a secure collaboration experience with intuitive features for fast and fluid file sharing. Box has built native apps for every leading mobile platform and we’re excited to continue our great relationship of innovation with the RIM team. - Karen Appleton, VP of Business Development at Box
If BlackBerry continues to support packaged apps, developers are expected to cling to the platform. The various platform partners for BlackBerry help developers create apps with ease. The apps can be extensively linked with BlackBerry’s core functionality using multiple frameworks. This will make development fun for developers.
Mobiletuts+ is currently seeking to hire writers to cover PhoneGap, mobile web development (e.g. JavaScript, HTML, CSS, etc.), iOS SDK development, Android SDK development, Mobile UI/UX, and more! If you’re a talented mobile developer with tips or techniques to share, we want to hear from you!
What We’re Looking For
Rock solid knowledge: Mobiletuts+ strives to be a highly respected source of information. Our authors must have excellent knowledge about their topic and be willing to perform thorough research when necessary.
Superb writing skills: You must be comfortable with the English language. We can proof your content, but we can’t rewrite everything for you. To put it simply, if you struggle when deciding whether to write its or it’s, this might not be the gig for you.
What’s the Pay?
$200 – $250 USD per full-length tutorial (depending on length/quality).
$50 – $150 USD per quick-tip (depending on length/quality).
Negotiable rates for well-known authors. Get in touch!
The exact rate is dependent on your industry experience, communication skills, and tutorial idea.
Excellent! Where Do I Sign Up?
We’ll need the following information from you:
Name.
A brief paragraph about your background and why you’re a good fit.
A single link to an article that you’ve written, helped create, or like.
Two ideas for tutorials that you’d like to write for us.
Optional: E-mails with a tutorial attached will get more attention from us since this means that you’re serious about the gig.
Please e-mail the above information to:
We get a fair number of single line e-mails. Don’t be that person. E-mails without the required information will be discarded.
This is the second installment in our Corona SDK Unblock Puzzle game tutorial. In today’s tutorial, we’ll add to our interface by creating the interactive elements of the unblock game. Read on!
Where We Left Off. . .
Please be sure to check part 1 of the series to fully understand and prepare for this tutorial.
Step 1: Start Button Listeners
This function adds the necessary listeners to the TitleView buttons.
function startButtonListeners(action)
    if(action == 'add') then
        playBtn:addEventListener('tap', showGameView)
        creditsBtn:addEventListener('tap', showCredits)
    else
        playBtn:removeEventListener('tap', showGameView)
        creditsBtn:removeEventListener('tap', showCredits)
    end
end
Step 2: Show Credits
The credits screen is shown when the user taps the about button. A tap listener is added to the credits view so that it can be removed later.
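The showCredits handler itself isn't listed here. A sketch consistent with the hideCredits code below might look like this; the credits.png asset name and the tween targets are assumptions.

```lua
function showCredits:tap(e)
    -- Hide the buttons and slide the credits view in from the bottom
    playBtn.isVisible = false
    creditsBtn.isVisible = false
    creditsView = display.newImage('credits.png', 0, display.contentHeight)
    lastY = titleBg.y
    transition.to(titleBg, {time = 300, y = titleBg.y - titleBg.height})
    transition.to(creditsView, {time = 300, y = display.contentHeight - creditsView.height, onComplete = function()
        creditsView:addEventListener('tap', hideCredits)
    end})
end
```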
Step 3: Hide Credits
When the credits screen is tapped, it’ll be tweened off the stage and removed.
function hideCredits:tap(e)
    transition.to(creditsView, {time = 300, y = display.contentHeight + 25, onComplete = function()
        creditsBtn.isVisible = true
        playBtn.isVisible = true
        creditsView:removeEventListener('tap', hideCredits)
        display.remove(creditsView)
        creditsView = nil
    end})
    transition.to(titleBg, {time = 300, y = lastY})
end
Step 4: Show Game View
When the Start button is tapped, the title view is tweened and removed, revealing the game view. There are many parts involved in this view, so we’ll split it across the next steps.
function showGameView:tap(e)
    transition.to(titleView, {time = 300, x = -titleView.height, onComplete = function()
        startButtonListeners('rmv')
        display.remove(titleView)
        titleView = nil
    end})
Step 5: Game Background
This code places the game background image on the stage:
-- Game BG
gameBg = display.newImage('gameBg.png', 10, 10)
Step 6: Movements TextField
Next we add the movements textfield to the stage. It will count the number of moves made by the player.
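The creation of the textfield isn't listed here; a sketch might look like this (the position and font size are assumptions):

```lua
-- Movements TextField
movements = display.newText('0', 280, 30, native.systemFontBold, 20)
movements:setTextColor(255, 255, 255)
```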
Step 7: Create Level
This part creates the blocks defined in the level table (l1) using two nested for loops.
    -- Create Level
    hblocks = display.newGroup()
    vblocks = display.newGroup()
    for i = 1, #l1 do
        for j = 1, #l1[1] do
            if(l1[i][j] == 1) then
                local v = display.newImage('vrect.png', 10 + (j * 50) - 50, 120 + (i * 50) - 50)
                v:addEventListener('touch', dragV)
                vblocks:insert(v)
            elseif(l1[i][j] == 2) then
                local h = display.newImage('hrect.png', 10 + (j * 50) - 50, 120 + (i * 50) - 50)
                h:addEventListener('touch', dragH)
                hblocks:insert(h)
            elseif(l1[i][j] == 3) then
                s = display.newImage('square.png', 10 + (j * 50) - 50, 120 + (i * 50) - 49)
                s:addEventListener('touch', dragH)
            end
        end
    end
    gameListeners('add')
end
Step 8: Game Listeners
This function adds the necessary listeners to start the game logic.
function gameListeners(action)
    if(action == 'add') then
        Runtime:addEventListener('enterFrame', update)
    else
        Runtime:removeEventListener('enterFrame', update)
    end
end
Step 9: Horizontal Drag
The next function handles the horizontal drag of the blocks.
function dragH(e)
    if(e.phase == 'began') then
        e.target.lastX = e.x - e.target.x
        e.target.initX = e.target.x
        movements.text = tostring(tonumber(movements.text) + 1)
    elseif(e.phase == 'moved') then
        e.target.x = e.x - e.target.lastX
        -- Calculate direction; initX is stored on the target because
        -- each touch phase arrives in a separate call to this listener
        if(e.target.initX < e.target.x) then
            dir = 'hl' --horizontal-left
        elseif(e.target.initX > e.target.x) then
            dir = 'hr' --horizontal-right
        end
    end
end
Step 10: Vertical Drag
Now we create the vertical drag function.
function dragV(e)
    if(e.phase == 'began') then
        e.target.lastY = e.y - e.target.y
        e.target.initY = e.target.y
        movements.text = tostring(tonumber(movements.text) + 1)
    elseif(e.phase == 'moved') then
        e.target.y = e.y - e.target.lastY
        -- Calculate direction; initY is stored on the target because
        -- each touch phase arrives in a separate call to this listener
        if(e.target.initY < e.target.y) then
            dir = 'vu' --Vertical-upwards
        elseif(e.target.initY > e.target.y) then
            dir = 'vd' --Vertical-downwards
        end
    end
end
Step 11: Hit Test Function
We’ll use an excellent and useful function for collision detection without physics. You can find the original example and source at the CoronaLabs Code Exchange web site.
function hitTestObjects(obj1, obj2)
    local left = obj1.contentBounds.xMin <= obj2.contentBounds.xMin and obj1.contentBounds.xMax >= obj2.contentBounds.xMin
    local right = obj1.contentBounds.xMin >= obj2.contentBounds.xMin and obj1.contentBounds.xMin <= obj2.contentBounds.xMax
    local up = obj1.contentBounds.yMin <= obj2.contentBounds.yMin and obj1.contentBounds.yMax >= obj2.contentBounds.yMin
    local down = obj1.contentBounds.yMin >= obj2.contentBounds.yMin and obj1.contentBounds.yMin <= obj2.contentBounds.yMax
    return (left or right) and (up or down)
end
Step 12: Vertical Borders
This code limits the movement by creating virtual borders.
function update(e)
    -- Vertical Borders
    for i = 1, vblocks.numChildren do
        if(vblocks[i].y >= 370) then
            vblocks[i].y = 370
        elseif(vblocks[i].y <= 170) then
            vblocks[i].y = 170
        end
Step 13: Collisions
Here we handle the collisions between the vertical and horizontal blocks.
        -- Hit Test
        if(hitTestObjects(vblocks[i], hblocks[i]) and dir == 'vu') then
            vblocks[i].y = hblocks[i].y + 75
        elseif(hitTestObjects(vblocks[i], hblocks[i]) and dir == 'vd') then
            vblocks[i].y = hblocks[i].y - 75
        end
        if(hitTestObjects(vblocks[i], hblocks[i]) and dir == 'hr') then
            hblocks[i].x = vblocks[i].x + 75
        elseif(hitTestObjects(vblocks[i], hblocks[i]) and dir == 'hl') then
            hblocks[i].x = vblocks[i].x - 75
        end
        if(hitTestObjects(s, vblocks[i])) then
            s.x = vblocks[i].x - 50
        end
    end
Step 14: Horizontal Borders
This code limits the movement horizontally by creating virtual borders.
    -- Horizontal Borders
    for j = 1, hblocks.numChildren do
        if(hblocks[j].x >= 260) then
            hblocks[j].x = 260
        elseif(hblocks[j].x <= 60) then
            hblocks[j].x = 60
        end
    end
    -- Square
    if(s.x >= 320) then
        display.remove(s)
        display.remove(vblocks)
        display.remove(hblocks)
        alert()
    elseif(s.x <= 35) then
        s.x = 35
    end
end
Step 15: Alert
The alert function stops the game, displays a message and removes the active listeners.
function alert()
    gameListeners('rmv')
    alertView = display.newImage('alert.png', 80, display.contentHeight * 0.5 - 41)
    transition.from(alertView, {time = 300, y = -82})
end
Step 16: Call Main Function
In order to start the game, the Main function needs to be called. With the above code in place, we’ll do that here:
Main()
Step 17: Loading Screen
The Default.png file is an image that is displayed right when you start the application, while iOS loads the basic data to show the Main Screen. Add this image to your project source folder; it will be automatically added by the Corona compiler.
Step 18: Icon
Using the graphics you created before, you can now create a nice-looking icon. The icon size for the non-retina iPhone is 57x57px, the retina version is 114x114px, and the iTunes store requires a 512x512px version. I suggest creating the 512x512 version first and then scaling down for the other sizes.
It doesn’t need to have the rounded corners or the transparent glare; iTunes and the iPhone will do that for you.
Step 19: Testing in the Simulator
It’s time to do the final test. Open the Corona Simulator, browse to your project folder, and then click open. If everything works as expected, you are ready for the final step!
Step 20: Build
In the Corona Simulator, go to File > Build and select your target device. Fill in the required data and click build. Wait a few seconds and your app will be ready for device testing and/or submission for distribution!
Conclusion
In this tutorial you learned how to make our interface come to life by allowing the user to interact with the game’s pieces. Experiment with the final result and try to make your own custom version of the game! I hope you liked this tutorial series and found it helpful. Thank you for reading!
A handful of predefined cell styles have been available to developers since iOS 3. They are convenient and very useful for prototyping, but in many situations you really need a custom solution tailored to the needs of the project you are working on. In this tutorial, I will show you how to customize table view cells by using static and prototype cells, and by subclassing UITableViewCell.
Anatomy of a Table View Cell
Even though UITableViewCell directly inherits from UIView, its anatomy is more complex than you might expect. If you plan on subclassing UITableViewCell, it is necessary to understand the anatomy of a table view cell. At its core, a table view cell is nothing more than a view with several subviews, a background and selected background view, a content view, and several other more exotic subviews, such as the accessory view on the right. Let’s take a look at the various subviews.
Content View
As its name implies, the content view contains the cell’s content. Depending on the cell’s style, this can include one or two labels and an image view. As the documentation emphasizes, it is important to add custom subviews to the cell’s content view because it ensures that the cell’s subviews are properly positioned, resized, and animated when the cell’s properties change. In other words, a table view cell expects its contents to be in its content view. The same is true for a collection view cell, which is virtually identical to a table view cell in terms of view hierarchy.
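As a quick illustration, custom subviews go in the content view, not in the cell itself. This snippet assumes a cell variable being configured in tableView:cellForRowAtIndexPath:; the frame and text are arbitrary.

```objc
// Add custom subviews to the cell's content view, not to the cell itself
UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(10.0, 10.0, 200.0, 24.0)];
[label setText:@"Custom Label"];
[cell.contentView addSubview:label];
```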
Accessory View
The accessory view of a table view cell can be any type of view. You are probably already familiar with the disclosure and detail disclosure indicator views. An accessory view, however, can also be a button, slider, or stepper control. Take a look at the Settings application on an iOS device to get an idea of what some of the possibilities are. Keep in mind that the space of an accessory view is limited for a standard table view cell. This means that not every view can or should be used as an accessory view.
Editing and Reordering Control
The editing control is another subview of a table view cell that slides into and out of view when the table view’s editing mode changes. When a table view enters into editing mode, the table view’s data source is asked which table view cells are editable by sending it a message of tableView:canEditRowAtIndexPath: for each cell currently visible. Editable table view cells are told to enter into editing mode, which shows the editing control on the left and, if applicable, the reordering control on the right. A table view cell in editing mode hides its accessory view to make room for the reordering control and the delete confirmation button that appears on the right when a row is marked for deletion.
Background Views
The background and selected background views are positioned behind all other subviews of a table view cell. In addition to these background views, the UITableViewCell class also defines a background view (multipleSelectionBackgroundView) that is used for table views supporting multiple selection.
Predefined Styles
In this tutorial, I won’t talk much about the predefined table view cell styles. The main focus of this tutorial is to show you how to customize table view cells in ways that are not possible with the predefined styles. Keep in mind, however, that the predefined cell styles allow quite a bit of customization. The UITableViewCell class exposes its primary (textLabel) and secondary (detailTextLabel) labels, as well as the cell’s image view (imageView) on the left. These properties offer direct access to the cell’s subviews, which means that you can directly modify a label’s attributes, such as its font and text color. The same is true for the cell’s image view.
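For instance, with a predefined style you can tweak the built-in subviews directly; a quick sketch (the text and styling here are just illustrative):

```objc
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"Cell"];
    if (!cell) {
        cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:@"Cell"];
    }

    // Direct access to the predefined subviews.
    cell.textLabel.text = @"Primary";
    cell.textLabel.textColor = [UIColor darkGrayColor];
    cell.detailTextLabel.text = @"Secondary";
    cell.detailTextLabel.font = [UIFont italicSystemFontOfSize:12.0];

    return cell;
}
```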
Despite the flexibility offered by the predefined cell styles, it is not recommended to reposition the various subviews. If you are looking for more control in terms of cell layout, then you need to take a different route.
Option 1: Static Cells
A table view populated with static cells is by far the simplest implementation of a table view from a developer’s perspective. As the name implies, a table view with static cells is static, which means that the number of rows and sections is defined at compile time, not runtime. However, it does not mean that the cell’s contents cannot be modified at runtime. Let me show you how all this works.
Create a new project in Xcode by selecting the Single View Application template, name it Static (figure 1), and enable storyboards and Automatic Reference Counting (ARC). At the time of writing, static cells are only available in combination with storyboards.
Static cells can only be used in conjunction with a UITableViewController, which means that we need to change the superclass of the existing view controller to UITableViewController. Before we take a look at the storyboard, add three outlets as shown below. This will enable us to modify the contents of the static cells that we will create in the storyboard.
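The three outlets in MTViewController.h might look like the following; the property names are my own and purely illustrative, so yours can differ as long as they match the connections you make in the storyboard:

```objc
#import <UIKit/UIKit.h>

@interface MTViewController : UITableViewController

// Outlets for the labels we will add to the static cells.
@property (weak, nonatomic) IBOutlet UILabel *firstLabel;
@property (weak, nonatomic) IBOutlet UILabel *secondLabel;
@property (weak, nonatomic) IBOutlet UILabel *thirdLabel;

@end
```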
Open the main storyboard, select the view controller, and delete it. Drag a table view controller from the Object Library and change its class to MTViewController in the Identity Inspector (figure 2). Select the view controller’s table view and set its Content attribute to Static Cells in the Attributes Inspector (figure 3). Without having to implement the UITableViewDataSource protocol, the table view will be laid out as defined in the storyboard.
You can test this by adding several labels to the static cells and connecting the three outlets that we defined in MTViewController.h a few moments ago (figure 4). As I mentioned earlier, the contents of the labels can be set at runtime. Take a look at the implementation of the view controller’s viewDidLoad below.
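A minimal viewDidLoad that populates the static cells could look like this; the outlet names match the hypothetical header above, and the label text is just an example:

```objc
- (void)viewDidLoad {
    [super viewDidLoad];

    // The static cells already exist as laid out in the storyboard;
    // we only update their contents at runtime.
    self.firstLabel.text = @"First";
    self.secondLabel.text = @"Second";
    self.thirdLabel.text = @"Third";
}
```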
Build and run the application in the iOS Simulator to see the result. Static cells are quite powerful, especially for prototyping applications. They are most useful when the layout of a table view doesn’t change, such as in an application’s settings or about view. In addition to specifying the number of rows and sections of a table view, you can also insert custom section headers and footers.
Option 2: Prototype Cells
Another great benefit of using storyboards is the prototype cell, which was introduced in conjunction with storyboards in iOS 5. Prototype cells are templates that are dynamically populated. Each prototype cell is identified by a reuse identifier through which the prototype cell can be referenced in code. Let’s take a look at another example.
Create a new Xcode project based on the Single View Application template. Name the project Prototype and enable storyboards and ARC for the project (figure 5). As we did in the previous example, change the superclass of the view controller (MTViewController) to UITableViewController. There is no need to declare outlets in this example.
As we did in the previous example, delete the existing view controller in the main storyboard and drag a table view controller from the Object Library. Change the class of the new table view controller to MTViewController in the Identity Inspector.
As I mentioned earlier, each prototype cell has a reuse identifier through which it can be referenced. Select the prototype cell in the table view and set its Identifier to CellIdentifier as shown in the figure below (figure 6).
As with static cells, you can add subviews to the content view of the prototype cell. There is no need to specify the number of rows and sections as we did in the previous project. When using prototype cells, however, it is required to implement the table view data source protocol (UITableViewDataSource). You might be wondering how to reference the subviews of the prototype cell’s content view. There are two options. The easiest option is to give each view a tag and ask the cell’s content view for the subview with that tag. Let’s see how this works.
Add a label to the prototype cell and set its tag to 10 in the Attributes Inspector (figure 7). In the MTViewController class, we implement the table view data source protocol as shown below. The cell reuse identifier is declared as a static string constant.
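A sketch of the data source implementation using the tag approach might look like this; the row count and label text are placeholders:

```objc
static NSString *CellIdentifier = @"CellIdentifier";

- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
    return 1;
}

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    return 25;
}

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];

    // Ask the content view for the subview with tag 10, that is, our label.
    UILabel *label = (UILabel *)[cell.contentView viewWithTag:10];
    label.text = [NSString stringWithFormat:@"Row %ld", (long)indexPath.row];

    return cell;
}
```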
Run the application in the iOS Simulator to see the result. Even though it seems easy to work with prototype cells by using tags to identify subviews, it quickly becomes inconvenient when the cell’s layout is more complex. A better approach is to use a UITableViewCell subclass. Create a new Objective-C class, name it MTTableViewCell, and make it a subclass of UITableViewCell. Open the header file of the new class and create an outlet of type UILabel named mainLabel.
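The header of the new subclass is minimal; all it declares is the outlet we will connect in the storyboard:

```objc
#import <UIKit/UIKit.h>

@interface MTTableViewCell : UITableViewCell

@property (weak, nonatomic) IBOutlet UILabel *mainLabel;

@end
```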
Before we can use our subclass, we need to make some changes to the main storyboard. Open the main storyboard, select the prototype cell, and then set its class to MTTableViewCell in the Identity Inspector (figure 8). Open the Connections Inspector and connect the mainLabel outlet with the label that we added to the prototype cell (figure 9).
The changes we made in the storyboard allow us to refactor the tableView:cellForRowAtIndexPath: as shown below. Don’t forget to import the header file of the MTTableViewCell class. I hope you agree that this change makes our code more readable and maintainable.
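With the subclass in place, the refactored data source method could look like the following; the row text is just a placeholder:

```objc
#import "MTTableViewCell.h"

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    MTTableViewCell *cell = (MTTableViewCell *)[tableView dequeueReusableCellWithIdentifier:CellIdentifier];

    // No tags required: the outlet gives us typed access to the label.
    cell.mainLabel.text = [NSString stringWithFormat:@"Row %ld", (long)indexPath.row];

    return cell;
}
```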
Run the application in the iOS Simulator. Prototype cells are a wonderful component of storyboards. They make the customization of table view cells incredibly easy with little effort.
Option 3: Subclassing
In the previous example, we created a custom UITableViewCell subclass. However, we didn’t really leverage the power of subclassing. Instead, we relied on the versatility of prototype cells. In the third and last option, I show you how to create a custom UITableViewCell subclass without using prototype cells. There are several strategies for creating a UITableViewCell subclass and the one I will show you in the following example is by no means the only way. With this example, I want to illustrate in what ways subclassing differs from the first two options in which we made use of Interface Builder and storyboards.
Create a new Xcode project based on the Single View Application template, name it Subclass, and enable ARC for the new project. Make sure that the checkbox labeled Use Storyboards is not checked (figure 10).
As we did in the two previous examples, start by changing the view controller’s (MTViewController) superclass to UITableViewController. Open the view controller’s XIB file, delete the view controller’s view, and drag a table view from the Object Library. Select the table view and set its dataSource and delegate outlets to the File’s Owner, that is, the view controller. Select the File’s Owner and set its view outlet to the table view (figure 11).
Before we create a custom subclass of UITableViewCell, let’s first implement the table view data source protocol to make sure that everything works as expected. As we did earlier, it is good practice to declare the cell reuse identifier as a static string constant. To make cell reuse (and initialization) easier, we send the table view a message of registerClass:forCellReuseIdentifier: and pass a class name and the cell reuse identifier as the first and second parameter. This gives the table view all the information it needs to instantiate new cells whenever no reusable cells are available. What does this gain us? It means that we never explicitly have to instantiate a cell. The table view takes care of this for us. All we need to do is ask the table view to dequeue a cell for us. If a reusable cell is available, the table view returns one to us. If no cells are available for reuse, the table view automatically creates one behind the scenes. A good place to register a class for cell reuse is in the view controller’s viewDidLoad method (see below).
The subclass that we are about to create is pretty simple. My goal is to show you what happens under the hood and what is required to create a UITableViewCell subclass, as opposed to using static or prototype cells. Create a new Objective-C class, name it MTTableViewCell, and make it a subclass of UITableViewCell. Open the class’s header file and add a public property of type UILabel with a name of mainLabel.
As you can see below, the implementation of MTTableViewCell is not complicated. All we do is override the superclass’s initWithStyle:reuseIdentifier: method. This method is invoked by the table view when it instantiates a cell for us. The downside of giving the table view permission to instantiate cells is that you cannot specify the first argument of this method, the cell’s style. You can read more about this on Stack Overflow.
In initWithStyle:reuseIdentifier:, we initialize the main label, configure it, and add it to the cell’s content view. As I explained in the introduction, the latter is very important if you want the custom cell to behave as a regular table view cell.
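A sketch of the implementation, assuming a simple fixed layout for the main label (the frame and font values here are arbitrary choices of mine, not from the finished project):

```objc
#import "MTTableViewCell.h"

@implementation MTTableViewCell

- (id)initWithStyle:(UITableViewCellStyle)style reuseIdentifier:(NSString *)reuseIdentifier {
    self = [super initWithStyle:style reuseIdentifier:reuseIdentifier];
    if (self) {
        // Initialize and configure the main label.
        _mainLabel = [[UILabel alloc] initWithFrame:CGRectMake(15.0, 5.0, 200.0, 34.0)];
        _mainLabel.font = [UIFont boldSystemFontOfSize:17.0];

        // Add the label to the content view, not the cell itself, so it is
        // positioned, resized, and animated correctly by the cell.
        [self.contentView addSubview:_mainLabel];
    }
    return self;
}

@end
```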
To put our new class to use, import its header file in MTViewController.m, update the view controller’s viewDidLoad method, and amend the tableView:cellForRowAtIndexPath: method of the table view data source protocol as shown below.
#import "MTTableViewCell.h"
- (void)viewDidLoad {
    [super viewDidLoad];

    // Register Class for Cell Reuse Identifier
    [self.tableView registerClass:[MTTableViewCell class] forCellReuseIdentifier:CellIdentifier];
}
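For completeness, the amended tableView:cellForRowAtIndexPath: might look like this; the row text is just a placeholder:

```objc
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    MTTableViewCell *cell = (MTTableViewCell *)[tableView dequeueReusableCellWithIdentifier:CellIdentifier forIndexPath:indexPath];

    // Because we registered the class in viewDidLoad, the table view
    // instantiates a new cell for us when none is available for reuse.
    cell.mainLabel.text = [NSString stringWithFormat:@"Row %ld", (long)indexPath.row];

    return cell;
}
```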
Subclassing UITableViewCell is a much more involved topic than what I discussed in this tutorial. If you want me to write more about this topic, then let me know in the comments below. Don’t forget to run the application in the iOS Simulator to see the subclass in action.
Conclusion
What are the advantages of using a custom subclass as opposed to using prototype cells? The simple answer is flexibility and control. Despite their usefulness, prototype cells have their limits. The main hurdle that many developers face when subclassing UITableViewCell is the fact that it is tedious. Writing user interface code is tedious and few people – if any – enjoy it. Apple created Interface Builder with good reason. It is also possible to create custom table view cells using Interface Builder and load the XIB file at runtime. I usually create complex table view cells in Interface Builder and translate the design to code when I am happy with the result. This trick saves you a lot of time.
Whether you should use Interface Builder to create custom table view cells or design custom UITableViewCell subclasses from scratch really depends on the situation and your preference. It is clear, however, that Interface Builder has become more powerful and the introduction of Xcode 4 meant another great leap forward – despite the early bugs and problems it suffered from.
Static cells seem very nice at first glance, but you will quickly run into limitations. However, you can’t deny that it is a very fast way to prototype an application.
There are many ways to create a mobile application and it is easy to get overwhelmed by the variety of services available, especially to the novice app developer. Here are a few widely used, tried, and true services to help streamline your workflow. Each service or product corresponds to a phase of development, from UI design, to programming, and lastly to testing and marketing!
Prototyping and Design
After you come up with a great app idea, the first step before programming is to design the user interface and create a prototype. This will give you a good idea of how your app will work and allow you to make any adjustments to the user experience.
AppCooker
Not only is AppCooker ($39.99) an excellent tool for creating mockups, it also has many features to help you prepare your app for the App Store. It integrates with Dropbox, Box.net, and your photo roll, so you can import icons and other UI assets directly into the prototyping tool. You can create simple shapes with gradients, strokes, and advanced fill techniques, and you have access to almost all the default Apple UI controls. If you are not ready to get into graphics-heavy design, you can opt to use the included “sketch”-themed assets to put together a rougher yet uniform prototype. AppCooker includes an easy-to-use dynamic linking feature that lets you link as many screens as you want, so you can think through all the various use cases you might encounter during the UX design phase.
It is only available on the iPad, but there is also a companion app called AppTaster for the iPhone/iPod touch. You can send your completed AppCooker prototype files to other users for testing or feedback. Your prototype can also be exported as a linked PDF.
POP
Short for “Prototyping On Paper”, POP is a marvelous blend of low tech and high tech engineered into a beautiful app for iOS. POP captures your UI sketches with your iPhone’s camera, then lets you quickly add touch “links” to other captured sketches. Publish your sketched prototype and collect feedback, all from within POP. POP is great for startups and those who follow lean UI processes, or anyone who wants to iterate through a potential idea without any excessive UI work. POP is free in the iOS App Store.
This is a web script that takes your maximum resolution 1024×1024 icon and sends you a zip file with every resolution required by Apple’s guidelines including retina and all device specific requirements. While this may seem like a simple tool, it saves a lot of time. You would be surprised by how much time is wasted reading through icon size requirements and manually resizing the same image over and over.
Fluid UI
Fluid UI is an easy-to-use, multi-platform web app that allows the user to create, test, and share mobile user interfaces. It includes elements for iOS, Android, and Windows phones. A great feature of Fluid UI is that it can also be used offline with its Chrome app. Fluid UI is free for one project, and pricing is then tiered based on expected use.
Sketch
Sketch is a vector/pixel hybrid design application for the Mac that excels at creating retina graphics. While it is primarily a replacement for Photoshop or Fireworks, it is fairly new and includes a variety of developer-friendly features, such as “export to CSS” and export-for-retina options. It’s worth checking out if you contemplate creating your own designs for your applications.
Once you have the initial UI and design elements taken care of, you might find the following coding and development tools useful:
SourceTree
Chances are you are using some type of version control for your project, and if you aren’t, you probably should be. SourceTree is a free Mac app for the Git and Mercurial version control systems. In my experience, most iOS developers use Git, although a few use Mercurial or SVN. SourceTree has you covered for whichever source control scheme you use. SourceTree is unique in that it is a GUI for the traditionally complex world of command line version control. It is simple enough for a novice Git user to use effectively, and robust enough for even the most seasoned application developer to find useful. With features such as incoming and outgoing changesets and intuitive branch management, you’d be surprised at how much time you’ll save in the terminal.
iOS Boilerplate
If you do web development you may have heard of HTML5 Boilerplate; well, now there is iOS Boilerplate! iOS Boilerplate is a blank slate of standards-compliant code from which you can begin your next iOS project. iOS Boilerplate is not intended to act as a framework, but it does include some solid, widely used third-party libraries so that you do not end up reinventing the wheel. You can modify and extend Boilerplate to meet your needs and use it in your personal or commercial apps.
AirServer
Sometimes presenting your application on a larger display can be useful. Perhaps you would like to demonstrate your latest feature or bug fix by creating a screen capture video without going into full-on video editor mode. AirServer allows your Mac to function as an Apple TV would, taking advantage of the AirPlay protocol and listening for any iOS device capable of broadcasting media, or in this case mirroring the iOS screen exactly as it appears while you are using it. This is particularly useful for group demos, as well as for testing applications within the context of a larger group. AirServer is available for both Mac and PC.
Easy APNS
Easy APNS is a PHP script for managing Apple push notifications from the backend. If you are interested in the backend portion of the Apple push notification ecosystem and happen to be familiar with PHP, Easy APNS is a must-have for your toolbox. It is completely open source, fairly easy to set up, and provides a straightforward way to control the entire push notification backend.
Slash
Slash is an open source library for iOS that adds an extensible markup language for styling NSAttributedStrings. The markup is similar to HTML, but you define the meaning of each tag, which makes it very extensible.
Displaying attributed strings in iOS 6 is fairly straightforward; programmatically creating them, however, is not. Using them in your app without Interface Builder requires tweaking NSRanges and font attributes by hand. Slash makes working with attributed strings in iOS simpler, and it yields cleaner code.
In the past year, we have witnessed an increase in server-side services aimed at mobile developers. These services claim to help with issues such as storage, scaling, delivering content, real-time functionality, and much more. With high reliability and toolsets that decrease development time considerably, it may be time to consider using a server-side service in your next app. Below you will find a short introduction to several of these services.
Firebase
Firebase is a cloud database that advertises itself as a “Scalable real-time backend”. Due to its focus on real-time, collaborative applications, Firebase gives you the ability to create unique experiences, especially in a multi-user or multi-player application. Also for those worried about security, Firebase uses a flexible rules language that allows you to easily write your security logic. It enforces these policies across your application.
Urban Airship
Urban Airship is one of the oldest and most trusted services in mobile development. In 2009, Urban Airship opened its doors to thousands of iOS developers by offering a push notification service that is easy to integrate into apps. Since then, the company has iterated on its core product and now offers several more products, including geofencing, location targeting, location history, and Passbook pass creation. Urban Airship’s products let you add location-aware features to your applications while giving you the ability to communicate the right message at the right time to your users’ phones.
Kinvey
Kinvey claims to take the hassle out of creating and maintaining your mobile backend. Kinvey is a cross-platform service with rich features that include user management, business logic, data storage, push notifications, large file storage backed by a CDN, analytics, automatic versioning, and several other features. This is a robust platform that has something for both indie developers and enterprise-level customers.
Parse
Parse is a feature-rich service that helps developers focus on the user experience by handling data storage and scaling. Parse also has powerful social and push notification features, and an impressive dashboard to manage it all. Furthermore, if you are looking to add mobile commerce to your app, Parse has recently partnered with Stripe to create an open source application that shows you how it’s done. Parse has great features, documentation, and tutorials, and is constantly innovating in this space.
StackMob
StackMob is a backend-as-a-service that claims its platform “reduces many of the backend challenges associated with building, deploying, and growing a mobile business.” If you’re working on a team project, StackMob’s collaboration tool makes it easy for developers, designers, and clients to work together. Additional functionality includes app analytics, S3 integration, geoqueries, Facebook and Twitter integration, and the ability to maintain separate development and production environments within one account.
Testing is an important part of the development process. Ensuring that your application runs smoothly before it is released to the app store saves a lot of time and customer service related emails. There are many services available to test your application, obtain feedback, and get crash reports. Here are a few of those services:
TestFlight
TestFlight is a free, on-the-fly provisioning and ad-hoc distribution and testing service. It tracks real-time crash notices and supports in-app user feedback prompts that can be triggered at specific points. It has been around for quite a while and is fairly robust. TestFlight still has some hiccups when it comes to managing test users’ provisioning profiles, but it has become almost an industry standard for pre-launch app testing.
Pieceable Viewer
Pieceable Viewer is a bit of code that you add to your development build. It sets up a web server and uses a VNC-like protocol to publish your simulated, recently built iOS application, which is then accessible via the web for viewing and testing purposes. This allows users to view and test the app and provide feedback without having to install the application on a device, or even own one, since the viewer is published to a web address of your choosing. This can be very helpful if you need to show how the app is functioning and are not prepared to provision and distribute a new build of the app.
Tokens
Tokens simplifies the process of generating and distributing the free promo codes that iTunes Connect issues whenever a new app or a new version is released. Additionally, it tracks who you shared the codes with and whether or not they have redeemed them. This lets you make the most of the limited number of promotional codes allotted (50) and make sure none are wasted. Using Tokens is good for the end user as well, because it skips the confusing manual redemption process and offers a friendly, step-by-step alternative through the Tokens service. It ultimately makes it easier for the people you want to share your finished application with to download and use it, instead of messing around with iTunes.
Smore
Smore is the fastest, easiest way to create a minimalist web presence for your new app. Using only the collateral that is already required for App Store submission (screenshots, description, etc.), you can create a beautiful, customizable, sharable mini-site to market your app. Smore provides traffic analytics so you can see how people are discovering your app. Smore is a freemium web app that should be in every app creator’s marketing toolbox.
Countly
Countly is an open source, self-hosted mobile analytics suite. If you have looked at Google Analytics for mobile or Flurry, then you are familiar with the sort of in-app analytics functionality that Countly provides. Countly takes the open source viewpoint and gives you all the server-side code you need to run your very own analytics suite on your own servers. This is useful for many reasons, such as developing your own tracking algorithms on top of Countly’s open source code, as well as retaining all rights to the data you collect. Countly is a powerful piece of software for those who understand its potential.
In this article I have shown you some of the many tools you can use to make the most efficient use of your time and streamline your development process. You’ll find that these services are useful for any app developer.
This tutorial will build upon the results of a previous Mobiletuts+ tutorial to create an enhanced version of the drawing app in which the thickness of the pen stroke changes smoothly with the speed of the user’s drawing and makes the resulting sketch look even more stylistic and interesting.
Overview
If you haven’t already done so, I strongly recommend that you work through the first tutorial before starting this one. I’ll make passing references to the concepts and code from the first tutorial, but I won’t go into the details here.
In the first tutorial we implemented an algorithm that interpolated the touch points acquired as the user drew on the screen with his finger. The interpolation was done with Bezier curve segments (provided by the UIBezierPath class in UIKit), with four consecutive touch points comprising a single Bezier segment. We then performed a smoothing operation on the junction point connecting two adjacent segments to achieve an overall smooth freehand curve.
Also recall that in order to maintain drawing performance and UI responsiveness, we would, at particular instants, render the drawing generated up to that point into a bitmap. This freed us to reset our UIBezierPath, preventing our app from becoming sluggish and unresponsive due to excessive computation on an indefinitely growing path. We carried out this step whenever the user lifted his finger off the screen.
Now let’s talk about our objectives for this tutorial. In principle, our requirement is straightforward: as the user draws with her finger, keep track of how fast her finger moves, and vary the width of the pen stroke accordingly. The exact relationship between the speed and how thick we want the stroke to be can be modified to achieve different aesthetic effects.
Keeping track of the drawing speed is simple enough; the app samples the user’s touch approximately 60 times per second (as long as there is no slowdown on the main thread), so the instantaneous speed of the user’s touch will be proportional to the distance between two consecutive touch samples.
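In code, that speed heuristic is simply the Euclidean distance between two consecutive touch locations; a small helper along these lines (the function name is my own) captures it:

```objc
#import <CoreGraphics/CoreGraphics.h>
#import <math.h>

// Approximate the instantaneous drawing speed as the distance between
// consecutive touch samples. Because touches arrive at a roughly constant
// rate (~60 Hz), this distance is proportional to the finger's speed.
static CGFloat distanceBetween(CGPoint a, CGPoint b) {
    CGFloat dx = b.x - a.x;
    CGFloat dy = b.y - a.y;
    return sqrtf(dx * dx + dy * dy);
}
```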
The obvious approach that suggests itself would be to vary the lineWidth property of the UIBezierPath class with respect to the drawing speed. However, this simple idea has a couple of issues and ultimately is not good enough to meet our demands. Keeping with the spirit of the first tutorial, we will implement this approach first, so we can examine its shortcomings and think about iteratively improving it or scrapping it altogether and trying something else. This is how real code development happens anyway!
As we develop our app, we’ll discover that, due to the new and more complex requirements, our app will benefit from moving some code to the background, in particular the bitmap drawing code. We’ll use Apple’s Grand Central Dispatch (GCD) for that.
Let’s dive right in and write some code!
1. First Attempt: A “Naive” Algorithm
Step 1
Fire up Xcode and create a new project with the “Empty Application” template. Call it VariableStrokeWidthTut. Make it a Universal project and check “Use Automatic Reference Counting” leaving the other options unchecked.
Step 2
In the project summary, for both device families, choose any one orientation as the only supported interface orientation; it doesn’t matter which one. I’ve chosen portrait, right-side up, in this tutorial. It’s reasonable for a drawing app to maintain a single orientation.
As discussed before, we’ll start with the simplest possible idea, varying the UIBezierPath‘s lineWidth property, and see what it gives us.
Step 3
Create a new file, call it NaiveVarWidthView, and make it a subclass of UIView.
Replace all the code in NaiveVarWidthView.m with the following:
This code has only a few modifications from the final version of the app from the first tutorial. I’ll only discuss what’s new here. Referring to the points in the code:
(1) We’re creating an off-screen bitmap to render (draw) into as before. This time, however, we perform the off-screen rendering step after every drawing update, that is, after every four touch points sampled, which comes to around 60/4 = 15 times per second. Why? A single UIBezierPath instance can have only one value of lineWidth. Since our objective is to vary the line width according to the drawing speed, instead of maintaining one long Bezier path to which we keep appending points (as in the first tutorial), we need to decompose our path into the smallest possible segments so each can have a different lineWidth value. Since four points define a cubic Bezier, our segments can’t be any shorter than that. If we only rendered to the bitmap when the user lifted her finger off the screen, we would have to allocate a new UIBezierPath object for every four points received, potentially indefinitely. At the other extreme, we can do the off-screen rendering step after every four points acquired, so that we only ever need a single UIBezierPath instance holding no more than four points, and that is what we’ve done here. A compromise is also possible: do the off-screen drawing step periodically but less frequently, creating new UIBezierPaths until that step happens.
(2) We’re using a simple heuristic for the “speed” value by computing the straight-line distance between adjacent points as a (rough) approximation for the length of the Bezier curve.
(3) We’re setting the lineWidth to be the inverse of the drawing speed times a “fudge factor” determined experimentally (such that the line has reasonable width at the average drawing speed a typical user is expected to draw with).
(4) After the offscreen bitmap render, we can remove all the points in our UIBezierPath instance and start fresh. To reiterate, this step happens after every four touch points acquired.
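The four points above can be tied together in a condensed sketch of the touch-handling logic in NaiveVarWidthView. This is an illustration of the ideas, not the full listing from the finished project; names such as pts, ctr, path, drawBitmap, and kFudgeFactor are assumptions on my part, modeled on the first tutorial:

```objc
// Inside NaiveVarWidthView's touchesMoved:withEvent: (sketch).
CGPoint p = [[touches anyObject] locationInView:self];
ctr++;
pts[ctr] = p;
if (ctr == 4) {
    // (2) Rough speed heuristic: straight-line distance between the
    // segment's endpoints. Guard against division by zero below.
    CGFloat dx = pts[4].x - pts[0].x;
    CGFloat dy = pts[4].y - pts[0].y;
    CGFloat speed = MAX(sqrtf(dx * dx + dy * dy), 0.1f);

    // Smooth the junction point, as in the first tutorial.
    pts[3] = CGPointMake((pts[2].x + pts[4].x) / 2.0, (pts[2].y + pts[4].y) / 2.0);

    [path moveToPoint:pts[0]];
    [path addCurveToPoint:pts[3] controlPoint1:pts[1] controlPoint2:pts[2]];

    // (3) Width inversely proportional to speed, scaled by an
    // experimentally determined fudge factor (hypothetical value).
    path.lineWidth = kFudgeFactor / speed;

    // (1) Render into the off-screen bitmap after every four points...
    [self drawBitmap];

    // (4) ...then reset the path and start fresh from the junction.
    [path removeAllPoints];
    pts[0] = pts[3];
    pts[1] = pts[4];
    ctr = 1;
}
[self setNeedsDisplay];
```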
Step 4
Paste the following code into AppDelegate.m in order to configure the view controller and assign it a view which is an instance of NaiveVarWidthView.
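A minimal application:didFinishLaunchingWithOptions: for this setup could look like the following; I’m using a plain UIViewController and replacing its view with an instance of our custom view:

```objc
#import "AppDelegate.h"
#import "NaiveVarWidthView.h"

@implementation AppDelegate

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];

    // A bare view controller whose view is our custom drawing view.
    UIViewController *viewController = [[UIViewController alloc] init];
    viewController.view = [[NaiveVarWidthView alloc] initWithFrame:self.window.bounds];

    self.window.rootViewController = viewController;
    [self.window makeKeyAndVisible];
    return YES;
}

@end
```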
Build the app and run. Scribble on your device and carefully note the result:
Here the line width is definitely changing with the variation in drawing speed, but the results are not really impressive. The width jumps rather abruptly instead of varying smoothly along the curve the way we would like. Let’s look at these problems in more detail:
As we discussed previously, the lineWidth property is a fixed value for a single UIBezierPath instance and unfortunately can’t be made to vary along its length. Even though we’re using the smallest possible Bezier path (with only four points), the stroke width still changes only at the junction of two adjacent paths, giving rise to a “jumpy” rather than continuous variation in width.
The second, implementation-related issue is that even though Core Graphics uses the abstract concept of “points” to represent sizes such as lineWidth, our “canvas” is actually composed of discrete pixels. Depending on whether the device has a non-Retina or Retina display, one unit of length in points corresponds to one or two pixels respectively. Like any good vector drawing API, Core Graphics internally employs “tricks” (such as anti-aliasing) to visually depict non-integral line widths, but it is not realistic to expect lines of arbitrary thickness: a line of width (say) 2.1 points will probably be rendered identically to one of width 2.0 points. Put another way, a perceptible change in rendering occurs only for a sufficiently large change in the value of the lineWidth property. This discretization issue is omnipresent, but the right approach or algorithm can make all the difference.
You might be able to improve the results marginally by tinkering with the lineWidth calculation and so on, but this approach is fundamentally limited, so we need to tackle the problem from a fresh perspective.
Before moving on to that, let’s address the fact that we’re now doing the offscreen rendering step periodically (up to 15 times per second, in fact) and, more significantly, we’re now doing it in between touch point acquisition. On my iPhone 4, I determined (using a counter and a timer that fired every second) that this was causing the touch acquisition rate to drop from 60-63 per second (for the code from the first tutorial) to around 48-52 per second, which is a significant drop! Obviously this represents a decrease in the app’s responsiveness, and it will further degrade the quality of the interpolation, making the resultant curve look less smooth. Strictly speaking, we ought to use the Instruments tool to analyze the app’s performance, but for the purposes of this tutorial let’s say we’ve done that and verified that the offscreen rendering operation is what’s consuming the most time.
The issue with our code lies in the touchesMoved:withEvent: method: after every fourth touch point is acquired, control enters the body of the if statement, executes the time-consuming rendering code, and only then exits the method. Until that happens, the UI is unable to process the next touch.
This type of problem, in general terms, is not an uncommon one. We have a time-consuming operation (in this case, off-screen rendering) whose result (the bitmap) is useful only after the entire operation finishes. At the same time, we have some short but frequent events that cannot tolerate latency (here, touch acquisition). If we have multiple processors to run our code, we’d like to separate the two “paths of code” so that they can execute independently, each on its own processor. Even with a single processor, we’d like to arrange things so that we have two separate code paths, with the execution of one interspersed between runs of the other, and the processor scheduling time for each code path according to its timing and priority requirements. Hopefully it’s clear that we’ve just described multithreading (albeit in a greatly simplified way).
One clue that multithreading would help in this situation is that we only need to draw the image once for every four consecutive touch points, so in reality, if things are arranged properly, there is more time available for the bitmap drawing code to run than we made use of above.
2. Moving Off-Screen Drawing to the Background With GCD
In general terms, we want to move the rendering code away from the main thread, which is responsible for drawing on the screen and processing user events. The iOS SDK offers several options to achieve this, including manual threading, NSOperation, and Grand Central Dispatch (GCD). Here we’ll be using GCD. It is not possible to cover GCD in significant detail in this tutorial, so my idea is to explain the bits we use as I run you through the code. If you understand the “design pattern” we’re going to apply and how it helps solve the problem at hand, you’ll be able to adapt it to other problems of a similar nature, for instance downloading large amounts of Internet data or performing a complex filtering operation on an image, all while keeping the UI responsive.
Step 1
Create a new UIView subclass called NaiveVarWidthBGRenderingView.
Paste the following code into NaiveVarWidthBGRenderingView.m:
Modify AppDelegate.m to #import NaiveVarWidthBGRenderingView.h and to set the root view controller’s view to be an instance of NaiveVarWidthBGRenderingView. Simply replacing the string NaiveVarWidthView with NaiveVarWidthBGRenderingView everywhere in AppDelegate.m will do the trick.
Run the code. We haven’t touched our drawing code yet, so there’s nothing new to see. Hopefully you’ll be satisfied knowing that your code makes more effective use of your device’s processing resources and probably performs better on older devices. On my iPhone 4, with the same test described above, the touch acquisition rate went back up to its maximum value (60-63 per second).
Now let’s study the code, with reference to the numbered points in the code listing:
(1) We’ve introduced an array to store incoming points, pointsBuffer. I’ll explain exactly why in a bit. The size of the buffer (100) was chosen arbitrarily; in fact we don’t expect this buffer to be filled beyond the four points belonging to a single Bezier curve segment. But it’s there to handle a certain situation that might conceivably arise.
(2) GCD abstracts threads behind the concept of a queue. We submit tasks (units of work) to queues. There are two types of queues, concurrent and serial. We’ll only talk about serial queues here, because that’s the only type we’re explicitly using. A serial queue executes the tasks placed on it strictly on a first-in, first-out basis, much like a first-come, first-served queue at a bank teller or a supermarket cashier. The word “serial” also indicates that a task will complete before the next one is run, much like a supermarket cashier won’t start attending to the next customer before he’s done serving the current one. Here we’ve created a queue and assigned it the identifier drawingQueue. It helps to bear in mind that all the code we normally write is tacitly executed on the always-existing main queue, which is itself a serial queue! So now we have two queues. We haven’t actually scheduled any work on the drawing queue yet.
(3) The call to the dispatch_async() function asynchronously schedules the bitmap drawing code, packaged in the block ^{ … }, on drawingQueue. “Asynchronous” implies that although the task has been submitted, it hasn’t necessarily executed yet. In fact dispatch_async() returns control to the caller immediately, in this case the body of the (-)touchesMoved:withEvent: method (on the main queue). This is a fundamental difference from our previous (non-thread based) implementation, where everything happened on the main queue and the bitmap drawing code had to execute to completion before anything else could proceed! Make sure you grasp this distinction. With our present implementation, on a multicore device it’s quite possible that the drawing queue’s tasks would run on a different core than the one processing the main queue, with both queues serviced simultaneously, much like a small supermarket with two cashiers serving two queues of customers at the same time. To understand how things work on a single-processor device, consider the following analogy: imagine an office with a single photocopier. The “copy-machine guy” has a load of work that he’s received in bulk, and which he is expected to take the whole day to complete. However, every now and then one of the office employees brings him a few pages to photocopy. Obviously, the smart thing for him to do is temporarily interrupt the time-consuming job that he’s been at throughout the day, complete the short (but ostensibly urgent) job submitted by the employee, and then get back to his previous duties. In this analogy, the employee’s short but urgent photocopy need corresponds to high-priority tasks that appear on the main queue, such as touch events or on-screen drawing, while the bulk job corresponds to time-consuming tasks such as downloading data from the Internet or (in our case) drawing to an off-screen buffer.
The operating system behaves like the smart copy-machine guy, scheduling tasks on the single processor (the lone photocopier) in a way that best serves the needs of the app (the office). (I hope this analogy wasn’t too cheesy!) Anyway, the actual code submitted to the drawing queue is pretty much what we had in our earlier implementation, except for our use of a buffer to which we append our touch points, which I’ll discuss next.
(4) This bit of code has to do with our use of the pointsBuffer array. Consider the hypothetical scenario in which an off-screen drawing task gets enqueued on the drawing queue but for some reason doesn’t get a chance to execute, and in the meantime the next four touch points are acquired on the main queue and another drawing task is enqueued behind the first one. Who knows, maybe our app is more complex and has other stuff going on at the same time as well. By buffering our touch points, we ensure that in the case of multiply-enqueued off-screen drawing tasks, the first one does all the drawing and the ones after it simply return because the points buffer is empty. As I said previously, this scenario of the drawing queue getting backed up with two or more waiting drawing tasks might not occur at all, and if it occurs persistently, it probably means our algorithm is too slow for the device, whether because of its complexity, poor design, or our app trying to do too many things at once. But on the off-chance it happens, we’ve handled it.
(5) All UI updates must happen on the main queue, which we accomplish with another asynchronous dispatch from within the drawing task on the drawing queue. As with the previous call to dispatch_async(), the task of updating the screen has been submitted, but this doesn’t mean that the app is going to drop what it’s doing and execute it right then and there.
The pattern that we’ve implemented looks like this in general, and is applicable to many other scenarios:
// On the main queue:
dispatch_async(aSerialQueue, ^{
    // Background processing
    dispatch_async(dispatch_get_main_queue(), ^{
        // Update UI with results
    });
});
In general, writing multithreaded code may not be an easy task. But it isn’t always as complex as you might think (as our own example indicates). It might sometimes seem like a “thankless chore” because there’s no explicit “wow factor” that you get to show at the end of it. But always bear in mind that if your app’s UI runs as smooth as butter then your users are much more likely to enjoy using it and come back to it again and again!
3. Developing a Better Algorithm
In the first iteration, we determined that it was unlikely we would make much improvement in getting a continuous and smooth width-varying pen stroke with the “naive” approach we’d started with. So now let’s try a new approach.
The method I’m going to present here is fairly straightforward, although it does require us to think out of the box. Instead of representing our drawn stroke with one Bezier curve as we were doing previously, we now represent it by the filled region between two Bezier paths, each slightly offset to either side of the imaginary curve traced out by the user’s finger. By slightly varying the offsets of the control points that define these two Bezier curves, we can achieve a very plausible effect of a smoothly varying pen width.
The figure above shows the construction described before for a single cubic Bezier segment. The x’s with red circles around them would correspond to the captured touch points and the dashed brown curve is the Bezier segment generated from these points. It corresponds to the Bezier path we drew in our previous implementations.
For each of the four touch points, a pair of offset points is generated, shown at either end of a green line segment. These green line segments are made perpendicular to the line segment joining two adjacent touch points. We thus generate two sets of four points, one on either side of the set of touch points, and each offset point set can be used to generate an offset Bezier curve lying on either side of the traced Bezier curve (the two solid brown curves). It should be clear from the figure that the width variation is controlled by the distances of the offset points (i.e. the lengths of the green line segments). If we fill the region between these two offset curves, we’ve effectively simulated a “stroke” of varying width!
This approach better leverages how vector drawing works in the Core Graphics/UIKit frameworks, because it models continuous variation much better than the abrupt stroke-width changes of the “naive” method, and, bottom line, it works well.
The main step we need to implement is a method that can give us the coordinates of these offset points. Let’s specify the problem more precisely and geometrically. We have a line segment connecting points p1 = (x1, y1) and p2 = (x2, y2), which I’ll denote as p1-p2. We’d like to find a line passing through p2, perpendicular to p1-p2. This problem is easy to solve if we formulate it in terms of vectors. The line segment p1-p2 can be represented by the equation p = p1 + (p2 - p1)t, where t is a variable parameter. Varying t from 0 to 1 causes p to “sweep” from p1 to p2 along the straight line connecting the two points. The two special cases are t = 0 corresponding to p = p1, while t = 1 corresponds to p = p2.
We can split up this parametric equation in terms of x and y coordinates to get the pair of equations x = x1 + t(x2 - x1) and y = y1 + t(y2 - y1), where p = (x, y). We need to invoke a theorem from geometry that states that the product of slopes of two perpendicular lines is -1. The slope of the line through (x1, y1) and (x2, y2) is equal to (y2-y1)/(x2-x1). Using this property and some algebraic manipulation, we can work out the end points pa and pb of the line perpendicular to p1-p2, such that pa and pb are an equal distance from p2. The length of pa-pb can be controlled by a variable that expresses the ratio of the length of this line to p1-p2. Instead of writing out a bunch of messy equations, I’ve drawn a figure that should clarify everything.
Step 1
Let’s implement these ideas in code! Create FinalAlgView as a subclass of UIView and paste the following code in it. Also, don’t forget to modify AppDelegate.m to use this class as the view controller’s view:
Let’s study this code, again with reference to the numbered comments:
(1) LineSegment is a simple C structure that has been typedef‘d to conveniently package the two CGPoints at the end of a line segment. Nothing special.
(2) The offsetPath is the path we’ll fill and stroke to achieve our variably thick pen stroke. It’ll be a closed path (meaning its first point is connected to its last one so that it can be filled), consisting of two Bezier subpaths offset to either side of the traced path, plus two straight line segments connecting the corresponding ends of the two subpaths.
(3) Here we’re dealing with the special case of the first touch when the user puts his finger on the view. We won’t create offset points for this first point.
(4) This is the factor that relates the drawing speed to the stroke width (taking the distance between two touch points as representing the user’s speed). The function len_sq() returns the squared distance between two points. Why the squared distance? I’ll explain that in the next point. As before, FF is a “fudge factor” that I decided upon after trial and error in order to get visually pleasing results. The clamp() function keeps the value of its argument from going below or above set thresholds, to prevent our pen stroke from becoming too thick or too thin. Again, the values of LOWER and UPPER were chosen after some trial and error.
(5) We create the method (-)lineSegmentPerpendicularTo:ofRelativeLength: to implement the geometrical idea our approach is based on, as discussed earlier. The first argument corresponds to p1-p2 from the figure. Observe from the figure that the longer p1-p2 is, the longer pa-pb will be (in absolute terms). By making f inversely proportional to the length of p1-p2, we “cancel out” this dependence on length, so that, for example, f = 0.5/length(p1-p2) would make pa-pb have length 1 point regardless of the length of p1-p2. To make pa-pb’s length vary inversely with the length of p1-p2, so that faster strokes produce a thinner line, I’ve divided by p1-p2’s length once more. This is the motivation for the inverse squared length factor from the previous point.
(6) This just constructs the closed path by joining together two Bezier subpaths and two straight line segments. Note that the subpaths comprising the offsetPath have to be added in a particular sequence, such that each subpath begins from the last point of the previous one. Note in particular the direction of the second cubic Bezier segment. You might trace out the shape of a typical offsetPath by following the sequence in the code to understand how it forms.
(7) This just enforces continuity between adjacent offsetPaths.
(8) We both stroke and fill the path. If we don’t stroke, then adjacent offsetPath segments sometimes appear non-contiguous.
Step 2
Build the app and run it. I think you’ll agree that the subtle width variation of the sketched line as you draw makes for an interesting stylistic effect.
For comparison, here’s what the end effect was with the algorithm with fixed stroke width from the original tutorial:
Conclusion
We started with a freehand sketching app, enhanced it to incorporate multithreading, and introduced a stylistic effect in the drawing algorithm. As always, there’s room for improvement. Touch ending (i.e. when the user lifts their finger after having drawn) needs to be handled so that the sketched line terminates gracefully. You might observe that if you scribble very fast in a zigzag sort of pattern, the curve can become quite pinched at its turning points. The width variation algorithm can be made more sophisticated so that the thickness of the line varies more realistically, or it could simply be played with to get some fun effects for a kids’ app! You can also vary the properties of the Bezier in each iteration of the drawing cycle. For instance, you can introduce subtle effects by varying the color of the fill and stroke slightly, in addition to the thickness of the stroke.
I hope you found this tutorial beneficial and that it gave you some fresh ideas for your own drawing/sketching app. Happy coding!
In this tutorial series, I’ll show you how to create a Match Shapes puzzle game with the Corona SDK. You’ll learn how to drag objects across the screen and detect when they collide without using the physics engine. The objective of the game is to match the shapes on the stage to its corresponding container. Read on!
CocoaPods is an easy-to-use dependency management tool for iOS and OS X development. Even though CocoaPods is fairly clear and simple to use, I feel that many cocoa developers are reluctant to give it a try. In this tutorial, I will show you how to get started with CocoaPods in less than five minutes.
What Is CocoaPods?
CocoaPods is a dependency management tool for iOS and OS X development. It makes managing third party libraries in an Xcode project easy and straightforward. CocoaPods has gained considerable traction in the cocoa community, and as a result, hundreds of open source libraries now provide support for CocoaPods. To get an idea of which libraries are available through CocoaPods, visit the CocoaPods website and search for some of your favorite third party libraries. Companies like TestFlight, Mixpanel, and Google make their libraries available through CocoaPods.
Why Should I Use CocoaPods?
Working with third party libraries in an Xcode project isn’t always easy and can often be a pain, especially when integrating non-ARC libraries in an ARC-enabled project. Eloy Durán started the CocoaPods project to ease this pain. Before switching to CocoaPods, I did what most developers do: manually copy the source files of third party libraries into an Xcode project and compile those files with the rest of the project. Even though this method worked fine, updating a project’s dependencies was a cumbersome and error-prone process.
CocoaPods makes managing a project’s dependencies much easier and more intuitive. You only need to specify which dependencies, or pods, you want to include in your project, and CocoaPods takes care of the rest. Updating a pod to a new version is as simple as executing a single command on the command line. If you have avoided the command line in the past, now might be a good time to become familiar with it.
1. Installing CocoaPods
Another great feature of CocoaPods is that it is distributed as a Ruby gem. This makes installing CocoaPods virtually painless. To install the CocoaPods gem, your system needs to have Ruby and RubyGems installed. Fear not because both Ruby and RubyGems are probably already installed on your computer.
Open a new Terminal window and type gem -v to determine if RubyGems is installed on your system. If you need help installing Ruby or RubyGems, Andrew Burgess wrote a great tutorial about RubyGems on Nettuts+. In case you run into problems installing CocoaPods, chances are that Andrew’s article has the solution to your problem.
Before installing CocoaPods, make sure to update RubyGems by running the following command on the command line:
gem update --system
Depending on your system configuration, it may be necessary to prefix this command with sudo.
If you want to read more about Ruby and RubyGems, Jeffrey Way wrote a nice article about Ruby on Nettuts+. The tutorial by Andrew Burgess that I mentioned earlier is part of an in-depth series called Ruby for Newbies. It is definitely worth checking out.
Installing CocoaPods is easy once Ruby and RubyGems are installed. Just run the following:
gem install cocoapods
Once installed, set up CocoaPods by running the pod setup command. During the setup process, the CocoaPods environment is created and a .cocoapods directory is added to your home folder. This hidden folder contains all the available pod specifications, or pod specs.
2. Starting With CocoaPods
Step 1
Instead of explaining how CocoaPods works, it is far easier to show you by creating a simple Xcode project using CocoaPods. Open Xcode and create a new project based on the Empty Application template. Name the project CocoaPods and make sure to enable ARC (Automatic Reference Counting) for the project (figure 1).
Step 2
Now that we created the project, we need to add a Podfile. A Podfile is similar to a Gemfile. It is a text file that contains information about the project and it also includes the project’s dependencies, that is, the libraries that are managed by CocoaPods. Close the Xcode project and open a new Terminal window. Browse to the root of your Xcode project and create a new Podfile by running touch Podfile in the Terminal. The touch command updates the modified date of a file, but it creates the file for you if it doesn’t exist, which is exactly what we want. Make sure to create the Podfile in the root of your Xcode project.
touch Podfile
Step 3
Open the Podfile in your favorite text editor. At the time of writing this tutorial, the current version of CocoaPods is 0.16. In this version, it is required to specify the platform, iOS or OS X. This will no longer be necessary in the next version of CocoaPods (0.17). In this example, we tell CocoaPods that we target the iOS platform as shown below. It is possible to specify the project’s deployment target, but this is optional. The default deployment target is 4.3 for iOS and 10.6 for OS X.
platform :ios, '6.0'
Step 4
With the platform specified, it is time to create the list of dependencies for the project. A dependency consists of the keyword pod followed by the name of the dependency or pod. A pod is just like a gem, a dependency or library that you wish to include in a project or application. You can optionally specify the version of the dependency as shown below.
pod 'SVProgressHUD', '0.9'
By omitting a pod’s version number, you tell CocoaPods to use the latest version available (which is what you want most of the time). You can also prefix the version number with a specifier to give CocoaPods more control over which version to use. The specifiers are mostly self-explanatory (>, >=, <, <=), but one of them, ~>, might seem odd if you are not familiar with RubyGems. The pod specification shown below indicates that the version of the AFNetworking library should be at least 1.0.1 but lower than 1.1. This is very useful if you want to give CocoaPods the ability to update a project’s dependencies when minor releases (bug fixes) become available, but exclude major releases that could include API changes that might break something in your project.
pod 'AFNetworking', '~> 1.0.1'
A dependency declaration has a lot more configuration options, which can be set in the Podfile. If you want to work with the bleeding edge version of a library, for example, you can replace a pod’s version number with :head as shown below. You can even tell CocoaPods what source to use by specifying the git repository or referring CocoaPods to a local copy of the library. These are more advanced features of CocoaPods.
pod 'AFNetworking', :head
pod 'SVProgressHUD', :git => 'https://github.com/samvermette/SVProgressHUD'
pod 'ViewDeck', :local => '~/Development/Library/ViewDeck'
Step 5
With our list of dependencies specified, it is time to continue the setup process. Update the Podfile as shown below and run pod install in the Terminal. Make sure to run this command in the root of your Xcode project where you also created the project’s Podfile.
platform :ios, '6.0'
pod 'ViewDeck', '~> 2.2.2'
pod 'AFNetworking', '~> 1.1.0'
pod 'SVProgressHUD', '~> 0.9.0'
pod 'HockeySDK', '~> 3.0.0'
pod install
CocoaPods inspects the list of dependencies and installs each dependency as specified in the project’s Podfile. The output on the command line shows you what CocoaPods is doing under the hood. The last two lines are especially interesting to us: CocoaPods creates an Xcode workspace and adds our project to that workspace. It also creates a new project named Pods, adds the listed dependencies to that project, and statically links them into a library. The newly created Pods project is added to that same workspace (figure 2).
Resolving dependencies of `./Podfile'
Updating spec repositories
Cocoapods 0.17.0.rc7 is available.
Resolving dependencies for target `default' (iOS 6.0)
Downloading dependencies
Installing AFNetworking (1.1.0)
Installing HockeySDK (3.0.0)
Installing SVProgressHUD (0.9)
Installing ViewDeck (2.2.5)
Generating support files
2013-03-28 11:13:54.663 xcodebuild[1352:1207] XcodeColors: load (v10.1)
2013-03-28 11:13:54.665 xcodebuild[1352:1207] XcodeColors: pluginDidLoad:
[!] From now on use `CocoaPods.xcworkspace'.
Integrating `libPods.a' into target `CocoaPods' of Xcode project `./CocoaPods.xcodeproj'.
When running the pod install command, CocoaPods tells us that we should use CocoaPods.xcworkspace from now on. Xcode workspaces are a key component of how CocoaPods works. An Xcode workspace is a container that groups one or more Xcode projects, which makes it easier to work with multiple projects. As we saw earlier, CocoaPods creates a new project named Pods and automatically adds it to a new Xcode workspace, together with the project that we started with. The key advantage is that your code and the project’s dependencies are strictly separated. The dependencies in the Pods project are statically linked into a library, which you can see in the Build Phases tab of our project (figure 3). It is important to remember to use the workspace created by CocoaPods instead of the project we started with. Our project is now ready, and we can start using the libraries specified in the project’s Podfile.
3. Updating Dependencies
It is obvious that CocoaPods makes it easier to quickly set up a project with several dependencies. However, updating dependencies becomes easier as well. If one of your project’s dependencies received a major update and you want to use this update in your project, then all you need to do is update your project’s Podfile and run pod update on the command line. CocoaPods will update your project’s dependencies for you.
4. Command Line
The CocoaPods gem has many more tricks up its sleeve. Even though you can use the CocoaPods website to browse the list of available pods, the CocoaPods gem also lets you list (list) and search (search) the available pod specs. Open a new Terminal window and enter the command pod search progress to search for libraries that include the word progress. The advantage of searching for pods using the command line is that you only see the information that matters to you, such as the pod’s source and the available versions.
Run the pod command (without any options) to see a complete list of available options. The CocoaPods manual is excellent, so make sure to read it if you run into problems or have any questions.
Conclusion
You should now have a basic understanding of CocoaPods so that you can start using it in your own development. Once you start using CocoaPods, chances are that you won’t be able to live without it. Even though I had heard of CocoaPods many times, it took me a while to make the transition. However, once I made the switch, I never created another project without it. I am sure you’ll love it once you’ve given it a try. Not only will it save you time, it will also make your life as a developer a bit easier.
Each month, we bring together a selection of the best tutorials and articles from across the whole Tuts+ network. Whether you’d like to read the top posts from your favourite site, or would like to start learning something completely new, this is the best place to start!
Firefighters do so much to help keep us safe. In this tutorial, we will honor our firefighters by creating a digital painting that depicts a firefighter coming to the rescue. Let’s get started!
With so many artists competing for the same work, getting noticed can be a challenging task. As the editor of Psdtuts+, I have had the opportunity to interact with some exceptionally talented artists and designers from all over the world. This proximity to so many artists has given me a unique perspective, and over the years, I have been able to make a lot of observations about what it takes for artists to increase their visibility and to raise their online profiles. In this article, I wanted to share several reasons why you might not be getting as much attention as you should.
What child wouldn’t love a real-life teddy bear to have as a friend? In this tutorial, we will show you how to create an adorable children’s illustration using digital painting techniques in Photoshop. Let’s get started!
I’m pleased to release our first ever round table, where we place a group of developers in a locked room (not really), and ask them to debate one another on a single topic. In this first entry, we discuss exceptions and flow control.
Not too long ago, I built a handful of generators for Laravel, which ease the process of various tasks. Today, thanks to help from Gaurav Narula, we’re turning things up a notch with the release of a new Sublime Text plugin that leverages the power of Artisan and the generators from directly within your editor.
The command line can either be your best friend, or your worst enemy. It simply depends on how you use it, and what you use it for. If you’re one of the many people who cringe at the mere thought of using the command line, then you’ve come to the right place!
In this tutorial we are going to draw a deer with custom Art Brushes, Graphic Styles and Blends in Adobe Illustrator, all of them created by us, so let’s get started.
Learn how to create text labels using Illustrator effects. Once you create this effect, you can apply it to any text and the size will update automatically.
In this tutorial you will learn how to create a series of minimalist and stylized birds and create a seamless pattern from them. The tutorial is aimed at users of Adobe Illustrator CS5 and below, since in CS6 creating patterns has become easier and more intuitive. In the second part we will then use this pattern for what could be a poster background and apply some effects to give it a retro look.
Let’s look at an alternative approach for displaying logos on a web page. Normally, you’ll approach the challenge by using an img tag. Perhaps you’ll use image replacement through CSS, perhaps you’ll even venture into SVG files, but have you considered what’s possible by designing your own web font ligature?
Perhaps you, or someone you know, has experienced the difficulties of computer interaction for the impaired. In general, operating systems and software suites have made provisions for accessibility for hearing-impaired audience, vision-impaired audience, and internationalization; however, the open web hasn’t caught up as quickly. Many sites ignore accessibility completely.
There’s a very good chance you know what Adobe Fireworks is, especially if you regularly use Photoshop or any of the other Adobe products. There’s an equally good chance you’ve never really taken the opportunity to see how it can help your web design workflow, even if you’ve always meant to. This session is here to put that right. Follow Leigh Howells as he demonstrates exactly what Fireworks can do for you in the real world.
The explosion of filmmaking since its democratization in 2008 has meant increasingly cheap ways of exploring the possibilities of motion pictures. One of the most popular cameras in recent times for beginners to start shooting on is the Canon Rebel series. Today, I’m going to take a look at the basics of getting your Rebel rolling, and provide some ideas on how to improve and develop those first attempts.
Panography was created to depict the way we naturally see. The way our eyes pick up on the details of a place or subject, then arrange them into a single image. The scale of detail you choose to create depends on the final image you see. Today, we’re going to take the style and techniques of panography and apply it to images we’ve already taken.
The Zone System is a standardized way of working that guarantees a correct exposure in every situation, even in the trickiest lighting conditions, such as back lighting or an extreme difference between the light and shadow areas of a scene, conditions that are most likely going to throw off your camera’s metering and give you a completely incorrect exposure.
We’re kicking March off right with a great tutorial from Shaun Keenan where you’ll learn how to create an advanced multi-character rig for Futurama’s Bender. In this series, Shaun will show you how to create a truly versatile and production friendly rig in Maya capable of a staggering set of options and a huge amount of control.
Today we’re excited to launch an ambitious new tutorial series from Stefan Surmabojov spanning both Cgtuts+ and Aetuts+. Over the course of this seven part series, you’ll be guided through the entire process of creating a high quality advertising spot for a Razer gaming laptop from the ground up.
In this tutorial Aleksey Vozeneski will walk you through how to model a low poly tree in Cinema 4D and how to achieve that paper look that is becoming quite popular these days. He’ll then move on and show you how to bake Global Illumination, so that render times can be significantly reduced and your scene will be production ready.
In this tutorial we are going to create a beautiful Magnifying Glass in After Effects. Using just one null object as a Controller we will be able to change all the parameters: Size, Distance, Rotation, Blur, Shadow, and Background. Everything is going to be automatic, and this will save you a bunch of time when animating!
Do you want to create a fun video for kids in just a few minutes? Check out this free After Effects® project file that will help you create words in any language and animate them in a colorful way. The project template is set up with a visual interface that makes it easy for beginner After Effects® users to customize. Take a look at the video tutorial for an overview of the template options, and get ready to spark your viewer’s imagination!
Pixel drawings are really small compared to an HD video canvas. This tutorial explores the common failure points when bringing your pixel artwork into a 1080p comp in After Effects. Scaling and looping are covered as well as a workflow for preparing your frames for animation in the free sprite editor ASEprite and Adobe Photoshop.
For some people it is not practical to use loudspeakers to mix their music tracks. It might be that their neighbours are easily disturbed, or their acoustic environment is not up to scratch. Despite the fact it is not usually recommended, many people make decent mixdowns using only headphones.
Ableton Live 9 has arrived, and among its new features is the ability to convert melody and drums into MIDI. In this tutorial we show you how to use these features, as well as some of their limitations.
Studio One 2.5 is quickly becoming my go-to DAW in my studio. If you are new to Studio One or are using the free version PreSonus has made available, I want to show you how you can quickly set up your MIDI controller inside of Studio One if it is not in the list of preset devices.
We’ve added a new page to the site, which will help WordPress pros grab top quality software, tools and services. It’s filled with our favorite WordPress resources. You can jump straight over to our Recommended Resources page here on Wptuts+ or read on for further information.
In this post, we’re going to review a few concepts around jQuery and WordPress to make sure that we, as developers, are not only working to build our products correctly, but that we also know how to properly diagnose problems as they arise in our customer’s sites.
In this two part series, we’re going to take a look at what cross-site scripting really is, its dangers, how it impacts WordPress development, and then practical steps that we can take for testing our themes and plugins.
With the release of iOS 4, the Core Location framework received a significant update in the form of support for geofencing. Not only can an application be notified when the device enters or exits a geofence, the operating system also delivers these notifications when the application is running in the background. This addition to the Core Location framework fits neatly into Apple’s endeavor to support multitasking on iOS. It opens up a number of possibilities, such as performing tasks in the background through geofencing, and that is exactly what this tutorial is about.
In this tutorial series, you’ll learn how to create an unblock puzzle game. The objective of the game is to clear the path for the square to get out. Read on!
Rockable Press is proud to present our latest release: Decoding the iOS 6 SDK. Written by five seasoned iOS experts and packed with almost 500 pages of essential iOS 6 development fundamentals, this great new eBook will quickly get you up to speed with the iOS 6 SDK and all the fundamental changes that occurred to Xcode and the iOS device landscape in 2012. Get your copy now!
Almost every major game released these days is made in 3D or uses a heavy amount of 3D assets. While there are still many games made in 2D, even platforms like Flash are now integrating 3D. In this bumper-length article I am going to explore what separates games from other mediums that use 3D art, and cover some of the important topics to consider when making 3D art for games.
Flixel is a free and open source 2D game development framework written by Adam “Atomic” Saltsman (Canabalt, Hundreds) in AS3 for making Flash games. It is a very mature, flexible and robust library. In this article, we’ll introduce you to the platform and its capabilities, and share tutorials, plugins, and suggestions to get you started developing games with it.
Would you believe me if I told you that after you finish reading and participating in the activities established in this article, you will have a game designed and ready to be developed? Yes, I know it sounds inconceivable, but trust me – this series of unconventional exercises will explain the workflow of designing a brand new game from zero to pitch.
Have you ever wondered if it was possible to merge similar PDF files into one file without downloading third-party software? Well, it turns out you can, and it’s really simple, too! In this screencast we show you how to easily merge your PDFs into one document using Preview.
If you’re wanting to reduce your paper clutter and digitise your old bank statements and receipts, a Doxie scanner is definitely the way to go. In this guide we’ll show you how to get the most from your Mac and Doxie.
Hazel, a folder monitoring application, has long been a favorite among many a Mac enthusiast. Hazel will automatically take action on your files, using the rules you create, keeping your folders in order. If you’ve wished that all of your downloaded music or any other sort of files would just do what you wanted them to, using only the power of your mind, well, this is the next best thing. We’ll look step-by-step at how to create a rule from scratch and then set up nine rules you can customize for your needs.
I’ve been thinking about a simple and contemporary way to add a little Easter decoration to my home as I don’t really want bunnies and eggs everywhere! These mini succulent egg decorations fit the bill nicely for me with a bit of macrame, some dip-dyeing, cute succulents, and of course eggs.
Do you have a favourite sweater that you can’t wear anymore because it (a) shrunk in the dryer; (b) got a stain or (c) developed a hole? If you can answer ‘yes’ to one (or more) of the above, don’t despair. With this tutorial you will learn how to transform and re-purpose an old sweater (or a charity shop find) into a very sweet Easter bunny plush. Let’s get started.
Easter is just around the corner and this pretty project will inject some colour into your decor. In this tutorial you’ll learn how to dye brightly-coloured eggs and embellish them with sweet silhouettes. You can download our free silhouette pattern to make the project super-easy. Read on for the full instructions after the jump.
Blogging is not a hard-sell environment. Readers expect to get useful information in posts, not pitches to hire you. So what can you write about? Quick tip: Provide useful or interesting information your prospects can use, and your readers will keep coming back — and some may end up becoming your clients. Here are 40 specific ideas for quick-and-easy blog topics that will attract quality prospects and then keep them interested.
Social networking is all about staying in touch with friends and making new contacts. On Google Plus, you add friends by putting them into your circles. You meet new people by hanging out in Google Plus communities. In this article, I give you a freelancer’s guide to who you should add to your circles, how to meet new people in communities, and how to use communities and circles as a marketing tool.
There are a lot of options on what incentives you can offer to potential readers. Before leaping in to creating any type of incentive piece, make sure you know exactly what will attract your audience. If necessary, survey a few of the people you’d most like to sign up for your list about what they really need.
In this course, we’ll review the process of building custom WordPress widgets using the WordPress API and advanced development techniques. This course begins at ground zero, and assumes that you have no experience with WordPress widget development. Stick with me for a few hours, and you’ll learn a lot! Let’s get to it.
In this course, join me, Christopher Roach, as I walk you through the creation of a simple Hacker News clone. Along the way, you’ll learn all the basics, including working with views, templates, the ORM, and even some of the more powerful features of the framework, like setting up the admin app and handling AJAX calls.
Whether you’re an ambitious illustrator or an experienced designer, almost everyone wants to improve their traditional drawing skills. Kirk Nelson is here to do just that! As an experienced digital painter and designer, Kirk walks you through the building blocks of digital drawing, shapes and shading, composing and perspective, and much more. Grab your tablet or stylus and let’s get started.
The importance of user experience should never be underestimated when developing mobile applications. If an app fails to deliver during the first experience, the consequence is that you lose a customer. Context circles allow you to better understand customers and craft a more compelling design!
A Perfect Experience Does Not Exist
It’s impossible to develop the ultimate user experience because every user experiences an application differently. This is due to cultural and personal aspects such as the society in which we live, personal tastes, and other various factors that affect the perception of an application.
In short, there are different ways to use an application, and it isn’t always easy to understand how users experience your mobile application. Context circles are a tool that allows us to better understand users and anticipate their experience while we develop applications.
Three Contexts
A context circle is actually a thought process. Context circles are a way to do research before you design your application or begin coding. People often underestimate the importance of strong research before an application is built. During this kind of research, a member of the target group you’re building your application for is usually involved (for example, a teenager or an elderly woman). However, it is also perfectly possible to do this research with only a pen, paper, and common sense.
By investing an hour or two to decide for yourself how exactly the application should work and what exactly the desired reaction of users is, you can discover some of the pitfalls and strengths of your application at a very early stage of the development process. The key is to be critical and test your application against reality. The context circle method is extremely useful for this.
In this article, we’ll have a look at three important context circles:
Physical context: Anticipates the physical setting and activity level of the user.
Technological context: This covers the design, hardware, operating system, and all other technological factors that influence how the user perceives your application.
Social context: This is for promotion and the social aspects of the internet that should always be used in a meaningful way.
Physical Context
As a developer, you are often too focused on only your application, while the entire user experience depends on external factors. An important first step to improve the user’s experience is to understand how the user intends to utilize your application. In which environment does he or she use it? At home? During travel? Are they in a hurry when the application is used? What external factors may cause the experience to be interrupted or cancelled? In short, understand the physical context the user might be in while using your application.
Let’s make this concept a bit more specific. When you are playing a game on your smartphone, the chances are quite high that you are waiting for someone or that you’re bored, and you could be interrupted at any given moment. Imagine you’re waiting for a train while playing a game and suddenly the train arrives; the game experience ends because you need to board the train. You want the user to be able to pick the game up again later, so you need a pause button that is integrated into the game interface, or perhaps the game state should simply be saved automatically when the app is closed. By creating scenarios like these, you can anticipate problems that may arise. After all, nobody wants to lose progress in a game because of an interruption!
In general, people look at the physical context from two different angles: the setting (how much noise in the background, light from the sun, the room you’re in, distractions, other people around you, etc.) and your activity (walking, driving, waiting for the bus, waiting in a line of people, cooking, shopping, etc.).
In short, if you can predict why the user uses your application and in what situation they are most likely to use it, you can anticipate problems that the physical setting creates. As a developer, you are often focused on just your own application, but remember that the overall user experience also depends on external factors.
The following questions will help you think about the physical context of your app:
In what location would the app be used the most?
What location based factors could disrupt the user’s interaction?
Are there any ways my application could anticipate or respond to these interruptions?
Will the user be multitasking while using the application?
How can user activity (e.g. walking) disrupt the experience?
Is there anything the app can do to anticipate these user activities?
How can my application leverage the user’s location?
Technological Context
Knowledge about the user makes your design and application stronger. Another important question to consider is what technology the user uses and how much technological knowledge the user has. Design choices are incredibly important. For example, there is currently quite a bit of debate about skeuomorphism versus flat design. A general rule is that the user should understand the interface of your application in just a few seconds. To summarize, keep your interface comprehensible.
The importance of testing should not be underestimated. For example, it’s possible that you perfectly understand the meaning of a newly designed icon, but your target audience may not, because they have never seen anything like it before.
Usability testing in the form of paper prototypes is always useful during the development of an application. It is fast, it is cheap, and you gain a lot of information through interaction with people from your target audience who may end up using your application. A paper prototype, as the name suggests, is a prototype drawn on paper. Users interact with the paper sketches while you mimic the actions of your application, which helps you understand why a user makes certain choices. It gives you feedback on your application very early in the design process.
Which operating platform to target first is another decision to make while you research the technological context of your application. You must ask yourself some specific questions, such as how you’ll use certain hardware, how heavy the application will be on the battery, and so on. An application that is expected to be light (for example, a simple to-do application) shouldn’t use a lot of battery power. Users won’t like apps that drain the battery and may end up deleting the application. Again, keep the full context in mind as to what the user finds important when it comes to the technological aspects of your application.
Some simple questions you need to answer while thinking of the technological context are:
What operating systems should the app support?
What are the strengths of each supported operating system?
What kind of device features or sensors will the app use?
How much of the device’s capacity should the app consume?
What are the technological expectations of my target market?
How can I provide what the targeted audience wants?
Does my application require an Internet connection?
How much Internet data will it send/receive?
How can I decrease the amount of data transfer required?
How can I protect stored user data?
Social Context
Use social media in such a fashion that it creates added value. The social context is undoubtedly the most challenging context to examine. The world is primarily interconnected through the internet, and the influence of various social media and community websites should not be underestimated by the developer.
In the social context, we examine how the social aspect of applications and the Internet can be used in a meaningful way. A share or like feature seems to be the norm nowadays, but you should ask yourself whether it is relevant and generates added value for the user (or for you).
Many different personal factors are included in the social context:
What’s the goal of the user?
What’s the purpose of your application?
How does the user interact with the application?
How much attention does the application require from the user?
How much attention will the user generally pay while using the application?
What functionalities does your application have?
How will the user use these functionalities?
How might the user use a functionality in a way that isn’t intended?
How does the user react to the interface?
Does the user need to be connected to the internet for specific features?
When people think of the social context of their application, they usually think of promotion as well. Promotion of an application can be tackled in many different ways, and it’s a good idea to think about how you’ll promote it to potential users. Context circle research helps you identify the weak and strong aspects of your application, and naturally you want to use your strengths to promote it. It’s also important to think about different options in case certain parts of your promotion strategy fail.
Conclusion
Are you about to develop an application, or are you in the process of developing one? Think of these different contexts and how different factors may impact your application. Adequate research is required to develop an application that actually addresses the user’s problems. After all, you should understand why a user would want to download your app in the first place before you start to build it! In short, your applications should be designed by humans, for humans!
The Google Maps API for Android provides developers with the means to create apps with localization functionality. Version 2 of the Maps API was released at the end of 2012 and it introduced a range of new features including 3D, improved caching, and vector tiles. In this tutorial series, we will create an app that uses Google Maps for Android V2 in conjunction with the Google Places API. The app will present a map to the user, mark their current location and nearby places of interest, and will update when the user moves.
This tutorial series about Using Google Maps and Google Places in Android apps is presented in four parts.
The processes required to integrate Google Maps and Google Places with Android apps are relatively complex and not suitable for beginners, so for the purposes of this tutorial it is assumed that readers have already completed at least a few basic apps in Eclipse.
This is a snapshot of the final app:
1. Integrate Google Play Services
Step 1
Google Maps Android API V2 is part of Google’s Play services SDK, which you must download and configure to work with your existing Android SDK installation in Eclipse in order to use the mapping functions. In Eclipse, choose Window > Android SDK Manager. In the list of packages that appears, scroll down to the Extras folder and expand it. Select the Google Play services checkbox and install the package.
Step 2
Once Eclipse downloads and installs the Google Play services package, you can import it into your workspace. Select File > Import > Android > Existing Android Code into Workspace then browse to the location of the downloaded Google Play services package on your computer. It should be inside your downloaded Android SDK directory, at the following location: extras/google/google_play_services/libproject/google-play-services_lib.
2. Get a Google Maps API Key
Step 1
To access the Google Maps tools, you need an API key, which you can obtain through a Google account. The key is based on your Android app’s debug or release certificate. If you have released apps before, you might have used the keytool utility to sign them. In that case, you can use the keystore you generated at that time to get your API key. Otherwise you can use the debug keystore, which Eclipse uses automatically when it builds your apps during development.
The API key is based on the SHA-1 fingerprint of your debug or release certificate. To retrieve it, you will need to run a command on the command line or in a Terminal. There are detailed instructions in the Google Developers Maps API guide for different operating systems and options. The following is an overview.
To use the debug certificate stored in the default location, run the following in a Terminal if you’re on Linux:
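For the default debug keystore, the command looks something like this; the androiddebugkey alias and the android passwords are the standard debug-keystore defaults:

```shell
keytool -list -v -keystore ~/.android/debug.keystore \
    -alias androiddebugkey -storepass android -keypass android
```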
You will need to alter the path to the debug keystore if it’s in a different location. On the Windows Command Line use the following for the default location, amending it to suit your C-Drive user folder:
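On Windows, the equivalent command looks roughly like this; replace your_user_name with your own user folder:

```shell
keytool -list -v -keystore "C:\Users\your_user_name\.android\debug.keystore" -alias androiddebugkey -storepass android -keypass android
```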
If you want to use a release certificate, you first need to locate your release keystore. Use the following to display the keystore details, replacing “your_keystore_name” with the path and name of the keystore you are using:
keytool -list -keystore your_keystore_name
You will be prompted to enter your password, then you should see the aliases associated with the keystore. Enter the following, amending it to reflect your keystore and alias names:
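The command takes the same form as before, now with your release alias added; both names below are placeholders:

```shell
keytool -list -v -keystore your_keystore_name -alias your_alias_name
```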
Whether you’re using a debug or release certificate, you will be presented with several lines of output. Look for the line beginning “SHA1” in the “Certificate fingerprints” section; it should comprise 20 hex byte values separated by colons. Copy this line and append your app’s intended package name to the end of it, storing it for future reference. For example:
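The stored value ends up looking something like the following; the fingerprint shown here is made up, and com.example.mapapp is a placeholder package name:

```
BB:0D:AC:74:D3:21:E1:43:07:71:9B:62:91:AF:A1:66:6E:44:5D:75;com.example.mapapp
```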
Step 2
Now you can use your certificate SHA-1 fingerprint to get an API key for Google Maps. Sign into your Google account and navigate to the APIs Console. If you haven’t used the console before, it will prompt you to create a project.
Go ahead and create one, then rename it if you wish by selecting the drop-down list in the top left corner, which may have the default name “API Project” displayed on it.
Choose Rename and enter your chosen project name, which does not matter for the purposes of this series.
Step 3
The APIs console manages access to many Google services, but you need to switch on each one you want to use individually. Select Services from the list on the left of the APIs console. You should see a list of Google services; scroll down to Google Maps Android API V2 and click to turn it on for your account.
Follow the instructions to agree to the API terms.
Step 4
Now we can get a key for the app. Select API Access on the left-hand-side of the API console. You may already see a key for browser apps, but we need one specifically for Android apps. Near the bottom of the page, select Create new Android key.
In the pop-up window that appears, enter the SHA-1 fingerprint you copied from your certificate (with your package name appended) and click Create.
The API Access page should update with your new key. In the new Key for Android apps section, copy the key listed next to API key and store it securely.
Tip: If you decide, for example, to use the debug certificate at first and then the release certificate, you will need to obtain a separate API key for the release certificate. Just follow the same process with the SHA-1 fingerprint of your release certificate when you are ready.
3. Create an Android App
Step 1
We are finally ready to create an app! In Eclipse, create a new Android Project (File > New > Project > Android Application Project). Enter your chosen package and application names; remember that the package name must be the one you used to generate the Maps API key. In the source code download, we target SDK version 17 with a minimum of 12. Let Eclipse create a blank Activity for you. In the download we use MyMapActivity for the Activity and activity_my_map for its layout.
Step 2
Although we added the Google Play services package to the Eclipse workspace, we still need to set up this particular app to use it. Select your new project in the Eclipse Package Explorer and open its Properties (right-click > Properties, or Window > Properties with the project selected). Select the Android tab and scroll to the Library section, then choose Add.
Select the Google Play Services library to add it to your project.
Choose Apply before exiting the Properties window.
4. Get Ready for Mapping
Step 1
Now we can get the app ready to use mapping functions by adding the appropriate information to the Project Manifest file. Open it and switch to the XML tab to edit the code. First let’s add the Google Maps API key inside the application element, using the following syntax:
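The meta-data element belongs inside the application element and looks like this; replace the placeholder value with the key you copied from the APIs Console:

```xml
<meta-data
    android:name="com.google.android.maps.v2.API_KEY"
    android:value="your_api_key_here" />
```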
Now we need to add some permissions to the Manifest. Inside the Manifest element but outside the application element, add the following, amending it to include your own package name:
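The Maps-specific permission is defined and then requested as follows; com.example.mapapp stands in for your own package name:

```xml
<permission
    android:name="com.example.mapapp.permission.MAPS_RECEIVE"
    android:protectionLevel="signature" />
<uses-permission android:name="com.example.mapapp.permission.MAPS_RECEIVE" />
```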
The Maps API is associated with lots of permissions, the need for which of course depends on your own app. The Developer Guide recommends adding the following to any app using Maps:
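At the time of writing, the recommended additions were along these lines: network and storage permissions, access to Google web services, and a uses-feature element declaring that the app requires OpenGL ES 2:

```xml
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="com.google.android.providers.gsf.permission.READ_GSERVICES" />

<uses-feature
    android:glEsVersion="0x00020000"
    android:required="true" />
```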
The layout for the app’s main Activity contains a Map Fragment. We give it an ID so that we can identify it in Java, and we provide (optional) initial values for camera tilt and zoom. You can specify lots of settings here, including latitude/longitude, bearing, and map type, as well as toggling the presence of features such as compass, rotation, scrolling, tilting, and zooming. These options can also be set from Java.
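A sketch of such a layout file is shown below; the ID and the camera values are only examples:

```xml
<fragment xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:map="http://schemas.android.com/apk/res-auto"
    android:id="@+id/the_map"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    class="com.google.android.gms.maps.MapFragment"
    map:cameraTilt="45"
    map:cameraZoom="14" />
```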
If you let Eclipse create your main Activity class, it should already have the following code in the onCreate method. If not, add it now:
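Assuming the Activity and layout names used in the download (MyMapActivity and activity_my_map), the generated code is essentially the following:

```java
import android.app.Activity;
import android.os.Bundle;

public class MyMapActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Inflate the layout that declares the Map Fragment.
        setContentView(R.layout.activity_my_map);
    }
}
```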
At this point you can run your app. You will need to run it on an actual device, as the emulator does not support Google Play services. Provided that the necessary resources are installed on your test device, you should see a map appear when the app runs. By default, the user can interact with the map in more or less the same way they would with the Google Maps app (rotating, etc.) using multi-touch interaction.
Conclusion
In this tutorial we created an app capable of using Google Maps Android API V2. Don’t be put off by the complexity of the setup process, as the Java coding itself is not particularly complex. In the following tutorials, we will use the Google Map object to control display and interaction with the map. We will get another API key, this time for the Google Places API. After we retrieve the user location and mark it on the map, we will retrieve information about nearby places of interest, process these results, and mark them on the map as well. Throughout this series we will run through the basics of using Google Maps and Google Places within Android apps, outlining options for further development that you can explore in a wide variety of app types.
Bonjour is a technology that makes the discovery of services very easy. Despite its power and ease of use, it doesn’t receive much attention in the Cocoa community. Bonjour works very well with the CocoaAsyncSocket library, an open-source library that provides an Objective-C interface for working with sockets on iOS and OS X. In this series, I will introduce you to Bonjour and the CocoaAsyncSocket library by creating a simple, networked game. Along the way, I will initiate you into the world of networking by discussing the TCP and UDP protocols as well as sockets, streams, and ports!
Introduction
In this series, we will create a simple networked game. Our primary focus will be the networking aspect of the game. I will show you how to connect two devices using Bonjour and the powerful CocoaAsyncSocket library. The game that we will create allows two people on the same network to challenge each other. The game itself won’t be very advanced, so don’t expect a graphically rich FPS.
In this series, I will not talk about the infrastructure that enables networked applications to communicate with one another. Instead, I will focus on the protocols and technologies that form the foundation of networked applications. A basic understanding of the TCP and UDP protocols, sockets, and streams is invaluable for any developer, particularly those who plan on creating applications that rely on network connectivity. Even if you don’t intend to use Bonjour, I highly recommend reading the rest of this article to get a better understanding of networking.
In this article, I will zoom in on several key components of networked applications. It will help you understand how Bonjour works, what Bonjour is (and isn’t), and it will also make working with the CocoaAsyncSocket library much easier.
Keep in mind that Bonjour isn’t required to develop a networked application. Most Unix-based operating systems, such as iOS and OS X, use BSD sockets as their fundamental network programming interface. On iOS and OS X, the BSD Socket library is readily available. Working with the BSD Socket library, however, is not for the faint of heart and requires an in-depth knowledge of socket programming and the C language. On iOS and OS X, you can also make use of the low-level CFNetwork framework, which is a direct extension to BSD sockets. Apple designed the CFNetwork framework to make networking easier by avoiding direct interaction with BSD sockets. One of the most important advantages of CFNetwork is its built-in support for run-loop integration. CFNetwork is part of the Core Foundation framework and written in C.
A surprising number of iOS and OS X developers are so used to the Objective-C syntax that they shy away from libraries and frameworks written in C. If you are one of those developers, then the CFNetwork framework may seem daunting. However, there is a solution for this, and its name is CocoaAsyncSocket. The CocoaAsyncSocket library makes interacting with sockets easier, and it also provides an elegant Objective-C interface. The current version of the CocoaAsyncSocket library integrates neatly with Grand Central Dispatch (GCD) which makes asynchronous programming a breeze.
Let’s start by taking a closer look at the basics of networking. Without a good grasp of sockets, ports, and streams, even Bonjour and the CocoaAsyncSocket library won’t be of much use to you!
Networking Basics
Under The Hood
Networking is not easy and this is something that won’t change anytime soon. Even though the infrastructure that gives us access to the Internet has changed dramatically during the past several decades, the underlying technologies and protocols have changed very little. The reason is that the services we use daily rely heavily on the underlying logical protocols and much less on the physical infrastructure. In the nineties, most of us browsed the web through a dial-up connection. Nowadays, the majority of people have access to a fast broadband connection, and in the past few years a significant portion of the web has begun to be consumed through mobile devices. In other words, the infrastructure has changed dramatically, but the logical protocols necessary for routing traffic and interacting with applications haven’t changed as dramatically.
Never Change a Winning Team
Another reason that the fundamental technologies and protocols that we use to transmit data on the Internet haven’t changed much is because they have proven reliable, performant, and robust. Those technologies and protocols are well tested and they have proven themselves countless times over the past few decades.
Sockets, Streams, and Ports
As a developer, you have probably heard of sockets, ports, addresses, and the like. If you are not familiar with these terms, then you are in for a treat, as I will introduce you to the wonders of sockets, ports, streams, and protocols. A solid grasp of these building blocks, sockets and their friends, is key to understanding networking.
Local and Remote Sockets
A network connection operates through sockets. A socket is one end of a communication channel between two processes that want to talk to each other. As you might have guessed, a network connection (or interprocess communication channel) has two sockets, one for each end of the channel. A network connection is established by one socket making a connection with another socket, the listening socket, which is listening for incoming connections.
The difference between a local and remote socket is merely semantic. From the perspective of a socket, the socket is the local socket and the socket it is connected to is the remote socket. That makes sense. Right?
While a socket is used to send and receive data, a stream can either read data or write data. This means that in most cases each socket has two streams, one for reading and one for writing. Even though streams are an important aspect of socket programming, we won’t work with streams directly as streams are managed for us by the CocoaAsyncSocket library.
Establishing a Network Connection
Each socket has an address that consists of the host address and a port number. Both parts are essential to establish a connection between two sockets. To keep things simple, the host address is usually the IP (Internet Protocol) address of the machine while the port number uniquely identifies the socket on the machine. Compare this concept with an apartment complex. The building has an address so people know where to find it. Each apartment in the building has a unique number so visitors of the complex can find the apartment they are looking for.
Transmission Protocols
Transmitting data over the Internet is a complex process, and it has resulted in the creation of two robust protocols for uniformly sending and receiving data: TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). Both protocols are transport layer protocols and part of the Internet Protocol (IP) suite.
I am sure that TCP and UDP ring a bell with most of you. There are several key differences between both protocols and it is important to understand those differences. In this series, we will focus on the TCP protocol. A TCP connection manages a stream of data from one endpoint to another.
Network Reliability
The key differences between TCP and UDP are speed and how they cope with network reliability. If you want to make sure that what is sent through one end of the connection comes out at the other end, then TCP is your friend. TCP is slower than UDP, but it has a good reason for being slower. Without going into too much detail, it is important to know that TCP establishes and terminates a connection with a handshake to identify both ends of the connection. It also makes sure that each packet that is sent through the channel arrives at the other end. In addition, TCP ensures that the order of the packets is respected.
One of the reasons that UDP is faster than TCP is because it doesn’t require a handshake when establishing and terminating a connection. In addition, the UDP protocol doesn’t care if a packet arrives and it also doesn’t care about the order in which packets arrive. If a packet is dropped along the way, the UDP protocol does not try to resend it as it is not even aware of the packet being dropped. The main concern of UDP is that data is sent through the communication channel as fast as possible.
I am sure that you are beginning to see that TCP and UDP are very different protocols and that each protocol serves a different purpose. UDP, for example, is ideal for streaming live audio and video. Speed is essential in these situations. It doesn’t matter if a packet is dropped along the way. If a dropped packet were resent, it would arrive late, and for live streaming it would no longer be relevant. Online multiplayer games also benefit from UDP. The speed of UDP is more important than its reliability. Packets that arrive too late are no longer relevant and that is the fundamental idea behind UDP – speed over reliability.
TCP, on the other hand, is all about reliability. It is used for email and browsing the web. It is a bit slower, but it will do its very best to make sure that you receive what you asked for. The TCP protocol is very robust and supports resending dropped packets and it also respects the order in which packets are sent. Even though we will be using the TCP protocol in this series, keep in mind that the CocoaAsyncSocket library also supports the UDP protocol.
Client and Server
In terms of networking, there is one more concept you need to understand: the client-server model. In every communication, there is a client and a server. Compare this model with two people making a phone call. Steven wants to make a phone call to Lucy. There are three fundamental requirements for this to work.
The first requirement is that Steven knows about Lucy and he needs to know Lucy’s telephone number. The same is true for a client trying to connect to a server. The client needs to know about the existence of the server and it needs to know the server’s address.
The opposite, however, is not true. Lucy does not need to know anything about Steven for Steven to call Lucy. In other words, a server does not need to know about the existence of a client for the client to connect to the server.
Once the connection is established, Steven can talk to Lucy and Lucy can talk to Steven. When a client is connected to a server, the client can send data to the server and the server can send data to the client.
This concept of a client and a server will become important when we look at Bonjour in practice in the next article of this series. Let’s conclude this tutorial by taking a brief look at Bonjour.
Where Does Bonjour Fit In?
What is Bonjour and how does it fit in our story? Bonjour is a technology created by Apple and based on Zeroconf. Its primary goal is to make the discovery of services easy. Chances are that you have used Bonjour numerous times without even knowing it. Have you ever used a printer on your local network? Didn’t it strike you that it took almost no effort to use the printer even though it wasn’t physically connected to your computer? Apple’s Remote iOS application also makes use of Bonjour, and so do a lot of other iOS and OS X applications.
Even though Bonjour is a great technology, keep in mind that it doesn’t take care of sending or receiving data. What Bonjour does very well is publishing and discovering services that are on the same local network. In the next article, we will take a closer look at Bonjour’s APIs and we will start building the client-server model that we discussed in this article.
Conclusion
With this introductory tutorial, you should have a basic understanding of networking, the various components involved, and the role of each component. In the remaining parts of this series, we will revisit and use some of these components so it is key that you understand what we covered in this article. In the next article, I will talk more about the client-server model by showing you how to connect two devices using Bonjour and the CocoaAsyncSocket library.