
iOS 8: How to Build a Simple Action Extension


App extensions were introduced during WWDC 14 as a way to extend the functionality of iOS apps to other parts of the system and to allow for better inter-app communication.

To name a few, you can use a Today extension to create a widget that will appear in Notification Center, a Sharing extension that will let the user share to their social network, or an Action extension that lets the user act on current content—either view it in a different way or change it. In this hands-on tutorial, we will build an Action extension from scratch.

Even though the tutorial doesn't require any additional knowledge, I do recommend that you take a look at a few resources if you'd like to learn more about extensions after reading this tutorial.

1. What Are We Going to Build?

We are going to build a simple Action extension called "Read it". The extension will accept text as input and will read the text using the speech synthesis API of the AVFoundation framework. I think this works well for the tutorial, because we are not introducing any third party dependencies or other difficulties.

This is what the extension will look like when it's finished. You can download the result of this tutorial from GitHub.

Our Action extension inside activity view controller

2. Creating an Action Extension

Step 1: Project Setup

Start by launching Xcode 6.1 or higher, and create a new project. Select New > Project... from Xcode's File menu and choose Single View Application from the list of templates.

Click Next and give your project a name of SampleActionExtensionApp. Enter an Organization Identifier and set Devices to iPhone. The language we'll be using for this tutorial is Objective-C.

Step 2: Add Target

Once you've created the project, you can add a target for the Action extension. Select New > Target... from the File menu. In the left pane, select Application Extension from the iOS section, choose Action extension, and click Next.

Set the Product Name to ReadItAction. Also note the other options, specifically the Action Type. I will get to that one in a minute. Click Finish to create the Action extension.

You will now be asked if you want to activate the ReadItAction scheme. Click Cancel, because we will be installing the Action extension by running the containing app instead.

Action Types

There are two types of Action extensions, one with a user interface and one without. You may be wondering what the benefit is of having an Action extension without a user interface, so let me explain.

Action extensions without a user interface act on the current item in a way that changes it. For example, an Action extension could remove red-eye from photos, and it doesn't need a user interface to do that. The containing app then has a chance to use the changed content, the enhanced photo in this case.

Action extensions with a user interface can be either full screen or presented as a form sheet. The template Action extension target uses the full screen presentation so that's what we are going to use.

Step 3: Implement User Interface

Now that we have the basics set up, we can start creating the user interface. We'll start with the containing app.

Click Main.storyboard in the SampleActionExtensionApp group in the Project Navigator. In the right pane, select the File Inspector and uncheck Use Size Classes. Note that if you were creating a real app that needed to support iPad, it would probably be a good idea to use size classes.

Open the Object Library and drag a text view and a toolbar onto the view. Set the text view's frame to {x:8, y:20, width:304, height:288} in the Size Inspector on the right. As for the toolbar, set its frame to {x:0, y:308, width:320, height:44} in the Size Inspector.

The toolbar contains one bar button. Select it, and, in the Attributes Inspector, set its Style to Plain and its Identifier to Action.

As a final touch, remove the default text of the text view and replace it with "Tap the action button to invoke activity view controller. Then select 'Read it' action and this text will be read by our sample Action extension."

The user interface of the view controller should now look something like this:

User interface of containing application

Of course, we could have left the containing app blank. After all, we are building a sample app extension so the app doesn't really have to do anything. But I wanted to show how easy it is to invoke the activity controller from inside your app and provide a point where other Action extensions can come in.

When the button in the toolbar is tapped, an activity view controller is presented, and we will be able to invoke our Action extension from there. Another good reason is that if you want to publish your extension on the App Store, it has to be part of a real app, and the app obviously has to do something in order to be approved.

Step 4: Present Activity View Controller

Next, we need to add some code to ViewController.m. Start by creating an outlet for the text view in the view controller's class extension as shown below.
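The original code listing isn't included in this excerpt, but a minimal class extension with the outlet might look like this:

```objc
@interface ViewController ()

// Outlet for the text view we added in Main.storyboard.
@property (nonatomic, weak) IBOutlet UITextView *textView;

@end
```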

Create an action named actionButtonPressed in which we initialize a UIActivityViewController instance and present it to the user.
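The action could look something like the following sketch. The exact listing isn't included in this excerpt, so treat it as one plausible implementation; passing the text view's text as the only activity item is an assumption that matches the extension we build later.

```objc
- (IBAction)actionButtonPressed:(id)sender {
    // Share the text view's contents; the activity view controller
    // will list our "Read it" Action extension among the activities.
    UIActivityViewController *activityViewController =
        [[UIActivityViewController alloc] initWithActivityItems:@[self.textView.text]
                                          applicationActivities:nil];
    [self presentViewController:activityViewController animated:YES completion:nil];
}
```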

Head back to Main.storyboard and connect the text view outlet to the text view by pressing Control and dragging from the View Controller object in the View Controller Scene to the text view, selecting textView from the popover menu.

To connect the action method, select the bar button in the toolbar and open the Connections Inspector. Drag from selector, under Sent actions, to the View Controller object, selecting actionButtonPressed: from the popover menu.

With the app's user interface ready and wired up, we can move on to building the Action extension.

Step 5: Implement User Interface

In the Project Navigator, expand the ReadItAction group and click on MainInterface.storyboard. You'll notice that the storyboard isn't empty and already contains a few user interface components. We'll use some of them, but we don't need the image view. Select the image view and remove it by pressing Delete.

Open the Object Library and add a text view below the navigation bar. Change its frame to {x: 8, y: 72, width: 304, height: 300}. Finally, double-click the navigation bar's title view and set the title to "Text reader".

Step 6: Implement ActionViewController

It's time to implement the Action extension. In the Project Navigator, select ActionViewController.m and make the following changes.

Below the import statements add an import statement for the AVFoundation framework so we can leverage the speech synthesis API in the Action extension.
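Since the framework import is a single line, it looks like this:

```objc
#import <AVFoundation/AVFoundation.h>
```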

In the class extension of the ActionViewController class, remove the imageView outlet and add one for the text view we added earlier.
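The resulting class extension might look like the following. The strong speechSynthesizer property is an assumption on my part; holding a strong reference keeps the synthesizer alive while it's speaking.

```objc
@interface ActionViewController ()

// Outlet for the text view we added in MainInterface.storyboard.
@property (nonatomic, weak) IBOutlet UITextView *textView;

// Assumed: keep a strong reference so the synthesizer
// isn't deallocated while speech is in progress.
@property (nonatomic, strong) AVSpeechSynthesizer *speechSynthesizer;

@end
```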

We also need to make some changes to the viewDidLoad method of the ActionViewController class.

The implementation is pretty easy to understand. In viewDidLoad, we obtain the input text, assign it to the text view, and create a speech synthesizer object that will read it.
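The listing itself isn't included in this excerpt; a sketch along these lines would do the job. It iterates over the extension context's input items, loads the first text attachment, and hands it to an AVSpeechUtterance. Note that kUTTypeText requires importing MobileCoreServices, and the property names match the class extension described above.

```objc
- (void)viewDidLoad {
    [super viewDidLoad];

    __weak ActionViewController *weakSelf = self;
    for (NSExtensionItem *item in self.extensionContext.inputItems) {
        for (NSItemProvider *itemProvider in item.attachments) {
            if (![itemProvider hasItemConformingToTypeIdentifier:(NSString *)kUTTypeText]) {
                continue;
            }
            [itemProvider loadItemForTypeIdentifier:(NSString *)kUTTypeText
                                            options:nil
                                  completionHandler:^(NSString *text, NSError *error) {
                // The completion handler may run on a background queue,
                // so hop to the main queue before touching the UI.
                dispatch_async(dispatch_get_main_queue(), ^{
                    weakSelf.textView.text = text;
                    AVSpeechUtterance *utterance =
                        [AVSpeechUtterance speechUtteranceWithString:text];
                    weakSelf.speechSynthesizer = [[AVSpeechSynthesizer alloc] init];
                    [weakSelf.speechSynthesizer speakUtterance:utterance];
                });
            }];
            return; // We only handle the first text attachment.
        }
    }
}
```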

Step 7: Configure Action Extension

Even though we are getting close, there are a few things that we still need to take care of. First, we need to connect the text view in the storyboard to the outlet we created a moment ago.

Open MainInterface.storyboard and connect the text view to the Image scene as we did in Main.storyboard a minute ago.

We also need to specify which data types the Action extension supports. In our case, it's only text. Expand the Supporting Files group and select Info.plist. In Info.plist, navigate to NSExtension > NSExtensionAttributes > NSExtensionActivationRule. Change the NSExtensionActivationRule's type from String to Dictionary.

With the dictionary expanded, click the + button next to it. This will add a child key. Set its name to NSExtensionActivationSupportsText, its type to Boolean, and its value to YES. This ensures that the Action extension is only visible when the input items contain text.
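Viewed as source code, the activation rule portion of the extension's Info.plist should then resemble the snippet below. Surrounding keys are abbreviated; the storyboard and extension point entries shown are the defaults generated by the template, not something we changed.

```xml
<key>NSExtension</key>
<dict>
    <key>NSExtensionAttributes</key>
    <dict>
        <key>NSExtensionActivationRule</key>
        <dict>
            <key>NSExtensionActivationSupportsText</key>
            <true/>
        </dict>
    </dict>
    <key>NSExtensionMainStoryboard</key>
    <string>MainInterface</string>
    <key>NSExtensionPointIdentifier</key>
    <string>com.apple.ui-services</string>
</dict>
```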

Still in Info.plist, change the Bundle Display Name to Read It. It looks better. This is what the related part of the Info.plist file should look like:

Activation rule in Info.plist

Step 8: Add an Icon

As a finishing touch, you can add an icon for the Action extension. In the Project Navigator, select the project and, under Targets, select the ReadItAction target. From the General tab, in the App Icons and Launch Images section, click Use Asset Catalog next to App Icons Source. In the prompt, click Migrate. Navigate to the asset catalog and drag the below icon to the iPhone App iOS 7,8 60pt 2x slot.

Action extension icon

Build and run the app to see if everything works as expected. There is one caveat, though. If the sound icon is not shown for the Action extension, you need to make sure that the main Images.xcassets file is being copied to the extension target.

To do that, select the project in the Project Navigator and choose the ReadItAction target from the list of Targets. Open the Build Phases tab at the top and expand the Copy Bundle Resources phase. If the Images.xcassets file is not in the list of resources, click the little plus symbol to add it to the list.

3. Run and Test

Run the app to try it out. Below are two screenshots that show the extension in action. You can also try invoking the activity view controller from the Notes app and let our extension read some of your notes. Also, try opening the activity sheet in the Photos app; you'll see that our extension is not listed, which is exactly what we'd expect based on the activation rules we set.

Our final sample Action extension

Conclusion

In this tutorial, you learned how to build a simple Action extension. We also covered the basics of using the speech synthesis API of the AVFoundation framework. If you're interested in creating other extensions, then check out some other tutorials on Tuts+, such as Cesar Tessarin's tutorial on creating a Today extension.

If you have any comments or questions, you can leave a comment below or contact me on Twitter.

Published 2015-01-26 by Lukas Petr


How To Write a Product Feature Set


One of the key benefits of a product feature set is that it helps communicate your product vision with others, such as your team or investors. In this article, I'll teach you how to structure your product feature set and what should be covered in such a document. Along the way, I'll try to convince you of the value of writing a product feature set.

When you start building a product, you have a vision of what you want to achieve. Through a product feature set, you’re forced to make your vision as specific as possible.

1. What & Why?

What is a product feature set?

A feature set can best be summarized as a written document that lists the specifications of a product. It includes the list of features that together make up the product. On top of that, you cover your design vision as well as what technologies will be used to build the product.

Why would you need this?

A product feature set is first and foremost used to facilitate the communication about your product vision. These are a few typical use cases:

  • For Yourself: The main purpose is to have a reference document you can rely on and refer back to. It forces you to be specific about what you want to create.
  • For Your Team: For teams, it makes an even stronger use case. Getting everyone on the same page about a product during its development isn't easy. While other processes, such as user stories, do a great job of describing the nitty-gritty details of a feature and its technical implementation, a feature set is useful to get everyone on the same page about the grand vision of a product. Typically, one person takes ownership of this living document. Usually, that's the product owner. This creates a status quo of what the product is and makes internal discussions with your team easier. In these scenarios, a product feature set is often called a product requirements document.
  • For Investors: You have an idea for a product and you’re trying to raise money. A single document covering the product in great detail helps investors understand what your product entails.
  • For Your Client: If you work as a freelancer or in a service company, such as an agency, the quality of your communication is often what separates the best from the great. Presenting a feature set to your client before entering design and development of a product assures everyone is on the same page.

A product feature set is a low-cost, highly valuable document that makes communication easier. It sets the tone for a product before it enters development.

2. Requirements of a Product Feature Set

There isn't a standard for a product feature set. I’ve found the following structure works best for me. It covers a variety of topics that define the direction of the product:

  • Introduction
    • Summary or Pitch
  • Vision
    • Product Vision
    • Design Vision
    • Business Vision
  • Product
    • Information Architecture
    • Technical Architecture
    • Features
    • Product Roadmap

Thinking about each of these individual elements will help you understand your product better. It will make the communication about different parts of said product easier.

3. Let's Start Writing

Enough with the theory, let's write a product feature set. In this article, I provide recommendations for how to write a product feature set. However, feel free to tweak it to your needs and, more importantly, your audience. If the product feature set is solely for investors, then the structure and wording can be different than when you're writing it for yourself.

Introduction

In the first section of a feature set, you write the summary of your product. In the summary, you set the stage for the rest of the document. This should be short and sweet. Try to stick to two to three paragraphs. If someone glances over the summary, they should know what the value proposition of your product is.

Let’s say you're writing a summary for Snapchat, in the early days of the product. The summary could be something like this:

With Snapchat, users are able to send private messages to each other. The private messages are in the form of photos taken by the user in real time. The user can select the amount of time the receiver is able to view the photo.

Through our product, we would like to bring back privacy to conversations, both between friends and strangers. The target audience mainly consists of men and women, between the age of 16 and 30.

By using private photos as the main method of communication, we expect short bursts of product usage. The user-generated content will be of lower quality as it is aimed at a single person. This goes against the current status quo of the industry, which is curating content for a large audience, such as on Instagram or Facebook.

Vision

In the vision section, we focus on the bigger picture of different aspects of your product.

Product Vision

In the product vision section, you have the opportunity to explain the bigger picture of your product. The best products all originate from an MVP or minimum viable product. If you're not sure what a minimum viable product is or you’re wondering how you can scope one yourself, you can read this article.

The product summary should explain your MVP. In the product vision, you describe your grand vision: what is the ultimate goal of the product?

Compare product development to climbing a mountain. Your MVP—the product summary—is your first stop on the mountain during the climb while the product vision is the summit.

Let's take another example, Facebook. Their product vision could have sounded like this:

Initially we want to focus on connecting college students at local colleges. Ultimately, we want to empower all people to connect with their friends, family, and strangers.

Design Vision

If you're a designer, I’m certain that you have a direction in mind for the user experience as well as the user interface. You might have a design style in mind, for example Material Design, or you can mention a number of products you really enjoy in terms of their user experience. This is what you cover in the design vision of your feature set.

Material Design is a possible vision for the user interface of your product.

For someone who's a lot less familiar with design, this might be a difficult section to write. If you can't think of much more than "clean, easy to use", then I'd recommend not including this section in your feature set.

A product's design is important. If it's not part of your skill set, then I strongly recommend seeking advice from a product designer.

Business Vision

Of course, the business model of the product is covered as well. There are plenty of monetization routes available to you, ranging from freemium and advertising to a subscription-based model. This is an important and broad topic that requires a separate article.

In this section, you describe how you plan to get a return on investment from the product and how you would define it as "successful". Depending on the product's goals, that might not even be revenue, but, for example, creating impact in a marketing campaign.

Remember that most products rarely generate revenue from day one. In fact, most products require significant traction before they break even. This is especially true for freemium and advertising-focused products.

Monetization for each product is different. Sometimes it's better to focus on user traction and sometimes it's better to focus on the monetary aspect of your product. It's a decision you have to make.

Product

Awesome. We've covered the 10,000-foot view of your goals. It's now time to get to the nitty-gritty. In the product section, you describe the plan at a more granular level. You define all the moving pieces of the product.

Information Architecture

First of all, the information architecture of the product needs to be defined. By defining the information architecture, you structure the product to support usability and discoverability.

In your feature set, the goal is to list the different flows of the product. This provides a good understanding of how big—or small—the product is. It helps people understand what features the product contains. It also answers the question of how a user navigates through your product.

The following outline is an example of an information architecture for a simple dating app:

  • User Entrance
    • Registration
    • Log In
    • Forget Password
  • Profile
    • View Your Profile
    • Edit Your Profile
    • Search Profiles
  • Connect
    • Like
    • Message

A great exercise is to try and map a large, existing product. For example, if you do this exercise for Facebook, you will realize that there are a lot of moving parts (events, groups, pages, advertising, …).

Technical Architecture

If you have a technical background, providing some high-level technical notes is recommended.

Personally, in the technical architecture I like to list APIs I plan on using, describe functionality of the backend, and describe possible technical challenges for the product.

I'm not a developer myself so my goal by defining the technical architecture is to be able to start a discussion with a team of developers.

The goal is not to make final technical decisions, but rather have a conversation about the underlying technology and how it affects the product.

Here's an example of a technical architecture from a feature set:

List of APIs:
  • Payment Transaction (PayPal)
  • Social Media (Facebook, Twitter, Foursquare)
  • Backend Communication (AFNetworking)
  • Push Notifications (ZeroPush)
  • Custom Tab Bar (RDVTabBarController)
  • In-House Tools (Ethanol)

Features

The features section is the most important section of your feature set. In this section, you describe the features of the product in greater detail.

You might wonder how much detail you should add. When a designer is able to design the user interface of the product based on your feature set, there's enough detail.

In essence, describing a feature means describing the different elements to make that feature work. What logic is required on the backend? What elements does the user interface need to have? How can I navigate between different flows? These are some questions you can ask yourself when writing the product's features.

Here's an example of a home feed that lists event invitations:

The home feed consists of a list of events. Each list item contains a title and a date. The list is sorted by date, the most recent events are displayed first. The list only shows current and future events, events in the past are no longer visible. Events the user has created have a visual indicator. Should a user not have any invitations in their home feed, then they will see an illustration plus copywriting with a call to action to create an event.

The home feed has a top navigation bar. In the top navigation bar, the user can navigate to their settings or create an event. The user can tap on an event to see the event detail screen.

Product Roadmap

The goal of your feature set is to focus on the minimum viable product as I described in the beginning of this article. As for most products, there's a grand vision of what you want to achieve and how you see your product grow in terms of features. This is covered in the product roadmap.

In the final section of the product feature set, you cover the future of your product.

What features would you possibly want to develop in a version 1.1? Version 1.5? What about 2.0?

What's important is that in this section you merely scratch the surface. As your product gains traction, you get insights into how people use your product. This information typically affects your product vision. Your product might get used in different ways than you imagined.

Here's a brief example of how a product roadmap could look for an MVP of a fitness product:

1. Connect with Others

One of the possible next routes for the product would be letting users connect with one another. We would build a social layer on top of the user profiles.

Possible other features are an activity feed, the ability to friend other users, finding a personal trainer, and the ability to message other users.

2. Web Integration & Profile Sharing

Profiles could become accessible online, much in the style of how Instagram approaches their web presence.

Conclusion

That's it. In this article, we've covered how you write a product feature set. Now it's your turn. The only way to truly learn how to write a feature set is by actually writing one.

If you don't have a product you're working on at the moment, I recommend writing the feature set of an existing product. It's a good exercise.

Questions? Let me know in the comments or on Twitter.


2015-01-28T16:45:44.000Z · Sven Lenaerts


Swift from Scratch: An Introduction to Classes and Structures


In the previous articles of this series, we covered the basics of the Swift programming language. If you followed along, you should now have a solid understanding of variables, constants, functions, and closures. It's now time to use what we've learned so far and apply that knowledge to the object-oriented concepts available in Swift.

To understand the concepts discussed in this tutorial, it's important that you have a basic understanding of object-oriented programming. If you're not familiar with classes, objects, and methods, then I recommend you first read up on these topics before continuing with this article.

1. Introduction

In this article, we're going to explore the fundamental building blocks of object-oriented programming in Swift, classes and structures. In Swift, classes and structures feel and behave very similar, but there are a number of key differences that you need to understand to avoid common pitfalls.

In Objective-C, classes and structures are very different. This isn't true for Swift. In Swift, for example, both classes and structures can have properties and methods. Unlike C structures, structures in Swift can be extended and conform to protocols.

The question is "What is the difference between classes and structures?" We'll revisit this question later in this article. Let's first explore what a class looks like in Swift.

2. Terminology

Before we start working with classes and structures, I'd like to clarify a few commonly used terms in object-oriented programming. The terms class, object, and instance often confuse people who are new to object-oriented programming, so it's important that you know how Swift uses these terms.

Objects and Instances

A class is a blueprint or template for an instance of that class. The term object is often used to refer to an instance of a class. In Swift, however, classes and structures are very similar and it's therefore easier and less confusing to use the term instance for both classes and structures.

Methods and Functions

Earlier in this series, we worked with functions. In the context of classes and structures, we usually refer to functions as methods. In other words, methods are functions that belong to a particular class or structure. In the context of classes and structures, you can use both terms interchangeably since every method is a function.

3. Defining a Class

Let's get our feet wet by defining a class. Fire up Xcode and create a new playground. Remove the contents of the playground and add the following class definition.
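The code listing didn't survive the feed conversion; the class definition the surrounding text describes is as minimal as it gets:

```swift
// A minimal Swift class definition with an empty body.
class Person {
}
```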

The class keyword indicates that we're defining a class named Person. The implementation of the class is wrapped in a pair of curly braces. Even though the Person class isn't very useful in its current form, it is a proper, functional Swift class.

Properties

As in most other object-oriented programming languages, a class can have properties and methods. In the updated example below, we define three properties:

  • firstName, a variable property of type String?
  • lastName, a variable property of type String?
  • gender, a constant property of type String
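The original listing is missing from this feed; a class definition matching the three properties described above would look like this:

```swift
class Person {
    var firstName: String?   // variable property of type String?
    var lastName: String?    // variable property of type String?
    let gender = "female"    // constant property, type String inferred from the initial value
}
```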

As the example illustrates, defining properties in a class definition is very similar to defining regular variables and constants. We use the var keyword to define a variable property and the let keyword to define a constant property.

The above properties are also known as stored properties. Later in this series, we'll learn about computed properties. As the name implies, stored properties are properties that are stored by the class instance. They are very similar to properties in Objective-C.

It's important to note that every stored property needs to have a value after initialization or be defined as an optional type. In the above example, we give the gender property an initial value of "female". This tells Swift that the gender property is of type String. Later in this article, we'll take a look at initialization in more detail and explore how it ties in with initializing properties.

Even though we defined the gender property as a constant, it is possible to change its value during the initialization of a Person instance. Once the instance has been initialized, the gender property can no longer be modified since we defined the property as a constant property with the let keyword. This will become clearer later in this article when we discuss initialization.

Methods

We can add behavior or functionality to a class through functions or methods. In many programming languages, the term method is used instead of function in the context of classes and instances. Defining a method is almost identical to defining a function. In the next example, we define the fullName method in the Person class.
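The original code block is missing here; a sketch of the fullName method consistent with the description, written in current Swift syntax (joined(separator:) rather than the Swift 1.x join function this 2015 tutorial would have used), could read:

```swift
class Person {
    var firstName: String?
    var lastName: String?
    let gender = "female"

    func fullName() -> String {
        var parts: [String] = []
        // Optional binding safely unwraps each property if it has a value.
        if let firstName = firstName {
            parts.append(firstName)
        }
        if let lastName = lastName {
            parts.append(lastName)
        }
        // Join the available parts with a space.
        return parts.joined(separator: " ")
    }
}
```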

The method fullName is nested in the class definition. It accepts no parameters and returns a String. The implementation of the fullName method is straightforward. Through optional binding, which we discussed earlier in this series, we access the values stored in the firstName and lastName properties.

We store the first and last name of the Person instance in an array and join the parts with a space. The reason for this somewhat awkward implementation should be obvious: the first and last name can be blank, which is why both properties are of type String?.

Instantiation

We've defined a class with a few properties and a method. How do we create an instance of the Person class? If you're familiar with Objective-C, then you're going to love the conciseness of the following snippet.
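The snippet itself is absent from this feed; it is a single line (the Person definition is repeated here so the example is self-contained):

```swift
class Person {
    var firstName: String?
    var lastName: String?
    let gender = "female"
}

// Instantiation looks like a function call: the class name followed by parentheses.
let john = Person()
```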

Instantiating an instance of a class is very similar to invoking a function. To create an instance, the name of the class is followed by a pair of parentheses, and the return value is assigned to a constant or variable.

In our example, the constant john now points to an instance of the Person class. Does this mean that we can't change any of its properties? The next example answers this question.
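The example in question is missing from this feed; a reconstruction consistent with the surrounding text would be:

```swift
class Person {
    var firstName: String?
    var lastName: String?
    let gender = "female"
}

let john = Person()

// Variable properties can be modified, even though john is declared as a constant.
john.firstName = "John"
john.lastName = "Doe"

// Assigning to the constant property is a compile-time error:
// john.gender = "male" // error: cannot assign to property: 'gender' is a 'let' constant
```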

We can access the properties of an instance using the convenience of the dot syntax. In the example, we set firstName to "John", lastName to "Doe", and gender to "male". Before we draw any conclusions based on the above example, we need to check for any errors in the playground.

Setting firstName and lastName doesn't seem to cause any problems. Assigning "male" to the gender property, however, results in an error. The explanation is simple. Even though john is declared as a constant, that doesn't prevent us from modifying the Person instance. There's one caveat, though: only variable properties can be modified after initializing an instance. Properties that are defined as constants cannot be modified after initialization.

Note that I emphasized after in the previous sentence. A constant property can be modified during the initialization of an instance. While the gender property shouldn't be changed once a Person instance has been created, the class wouldn't be very useful if we could only instantiate female Person instances. Let's make the Person class a bit more flexible.

Initialization

Initialization is a step in the lifetime of an instance of a class or structure. During initialization, we prepare the instance for use by populating its properties with initial values. The initialization of an instance can be customized by implementing an initializer, a special type of method. Let's add an initializer to the Person class.
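The listing is missing from this feed; an initializer matching the description could look like this:

```swift
class Person {
    var firstName: String?
    var lastName: String?
    let gender: String

    // init isn't preceded by the func keyword and doesn't return the instance.
    init() {
        // A constant property may be assigned a value during initialization.
        gender = "female"
    }
}
```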

Note that the name of the initializer, init, isn't preceded by the func keyword. In contrast to initializers in Objective-C, an initializer in Swift doesn't return the instance that's being initialized.

Another important detail is how we set the gender property with an initial value. Even though the gender property is defined as a constant property, we can set its value in the initializer. We set the gender property by using the property name, but it's also fine to be more explicit and write the following:
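Wrapped in a minimal class for context, the more explicit form reads:

```swift
class Person {
    let gender: String

    init() {
        // self refers to the instance being initialized.
        self.gender = "female"
    }
}
```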

In the above example, self refers to the instance that's being initialized. This means that self.gender refers to the gender property of the instance. We can omit self, as in the first example, because there's no confusion what property we are referring to. This isn't always the case though. Let me explain what I mean.

Parameters

In many situations, you want to pass initial values to the initializer to customize the instance you're instantiating. This is possible by creating a custom initializer that accepts one or more arguments. In the following example we create a custom initializer that accepts one argument, gender, of type String.
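The listing itself didn't survive the feed conversion; a custom initializer matching the description would be:

```swift
class Person {
    var firstName: String?
    var lastName: String?
    let gender: String

    init(gender: String) {
        // self.gender is required here because the parameter name shadows the property.
        self.gender = gender
    }
}
```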

There are two things to note. First, we are required to access the gender property through self.gender to avoid ambiguity, since the local parameter name is equal to gender. Second, even though we haven't specified an external parameter name, Swift by default creates an external parameter name that is equal to the local parameter name. The result is the same as if we were to prefix the gender parameter with a # symbol.

In the following example, we instantiate another Person instance by invoking the custom initializer we just defined.
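The example is missing here; invoking the custom initializer is a one-liner (the constant name frank is illustrative, and the Person definition is repeated so the snippet is self-contained):

```swift
class Person {
    var firstName: String?
    var lastName: String?
    let gender: String

    init(gender: String) {
        self.gender = gender
    }
}

// Pass a value for the gender parameter to customize the constant property.
let frank = Person(gender: "male")
```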

Even though the initial value of the gender property is set to "female" in the class definition, by passing a value for the gender parameter we can assign a custom value to the constant gender property during initialization.

Multiple Initializers

As in Objective-C, a class or structure can have multiple initializers. In the following example, we create two Person instances. In the first line, we use the default initializer. In the second line, we use the custom initializer we defined earlier.
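The listing is absent from this feed; a class with both initializers, and the two instantiations the text describes, could look like this:

```swift
class Person {
    var firstName: String?
    var lastName: String?
    let gender: String

    // Default initializer.
    init() {
        gender = "female"
    }

    // Custom initializer accepting a gender argument.
    init(gender: String) {
        self.gender = gender
    }
}

let person1 = Person()               // uses the default initializer
let person2 = Person(gender: "male") // uses the custom initializer
```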

4. Defining a Structure

Structures are surprisingly similar to classes, but there are a few key differences. Let's start by defining a basic structure.
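The original structure definition is missing from this feed; a basic structure in the spirit of the text, with initial values supplied in the initializer rather than at the property declarations, might look like this:

```swift
struct Point {
    var x: Int
    var y: Int

    init() {
        // Swift inspects the initializer to determine the initial
        // value of each property, so no inline defaults are required.
        x = 0
        y = 0
    }
}
```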

At first glance, the only difference is the use of the struct keyword instead of the class keyword. The example also shows us an alternative approach to supply initial values to properties. Instead of setting an initial value for each property, we can give properties an initial value in the initializer of the structure. Swift won't throw an error, because it also inspects the initializer to determine the initial value—and type—of each property.

5. Classes and Structures

You may start to wonder what the difference is between classes and structures. At first glance, they look identical in form and function, with the exception of the class and struct keywords. There are a number of key differences though.

Inheritance

Classes support inheritance whereas structures don't. The following example illustrates this. The inheritance design pattern is indispensable in object-oriented programming and, in Swift, it's a key difference between classes and structures.
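The example is missing here; a sketch consistent with the description (Student inherits the properties and initializers of Person, and the last line invokes the inherited custom initializer) would be:

```swift
class Person {
    var firstName: String?
    var lastName: String?
    let gender: String

    init(gender: String) {
        self.gender = gender
    }
}

// Student is a subclass of Person and inherits its properties and behavior.
class Student: Person {
}

// The last line: initialize a Student via the initializer defined in Person.
let student = Student(gender: "male")
```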

In the above example, the Person class is the parent or superclass of the Student class. This means that the Student class inherits the properties and behavior of the Person class. The last line illustrates this. We initialize a Student instance by invoking the custom initializer we defined earlier in the Person class.

Copying and Referencing

The following concept is probably the most important concept in Swift you'll learn today, the difference between value and reference types. Structures are value types, which means that they are passed by value. An example illustrates this concept best.
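The example itself didn't survive the feed conversion; reconstructed from the description, it reads:

```swift
// Point encapsulates a coordinate in a two-dimensional space.
struct Point {
    var x: Int
    var y: Int
}

var point1 = Point(x: 0, y: 0)
var point2 = point1  // point2 receives a copy of point1
point1.x = 10

print(point1.x) // prints 10
print(point2.x) // prints 0
```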

We define a structure, Point, to encapsulate the data to store a coordinate in a two-dimensional space. We instantiate point1 with x equal to 0 and y equal to 0. We assign point1 to point2 and set the x coordinate of point1 to 10. If we output the x coordinate of both points, we discover that they are not equal.

Structures are passed by value while classes are passed by reference. If you plan to continue working with Swift, you need to understand the previous statement. When we assigned point1 to point2, Swift created a copy of point1 and assigned it to point2. In other words, point1 and point2 each point to a different instance of the Point structure.

Let's now repeat this exercise with the Person class. In the following example, we instantiate a Person instance, set its properties, assign person1 to person2, and update the firstName property of person1. To see what passing by reference means for classes, we output the value of the firstName property of both Person instances.
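The listing is missing from this feed; a reconstruction of the exercise (the property values are illustrative) could look like this:

```swift
class Person {
    var firstName: String?
    var lastName: String?
    let gender: String

    init(gender: String) {
        self.gender = gender
    }
}

let person1 = Person(gender: "male")
person1.firstName = "Jim"

let person2 = person1  // person2 references the same instance as person1
person1.firstName = "John"

// Both outputs reflect the update, because both variables
// reference the same Person instance.
print(person1.firstName!) // prints John
print(person2.firstName!) // prints John
```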

The example proves that classes are reference types. This means that person1 and person2 refer to or reference the same Person instance. By assigning person1 to person2, Swift doesn't create a copy of person1. The person2 variable points to the same Person instance person1 is pointing to. Changing the firstName property of person1 also affects the firstName property of person2, because they are referencing the same Person instance.

As I mentioned several times in this article, classes and structures are very similar. What separates classes and structures is very important. If the above concepts aren't clear, then I encourage you to read the article one more time to let the concepts we covered sink in.

Learn More in Our Swift Programming Course

If you're interested in taking your Swift education to the next level, you can take a look at our full course on Swift development.

Conclusion

In this installment of Swift from Scratch, we've started exploring the basics of object-oriented programming in Swift. Classes and structures are the fundamental building blocks of most Swift projects and we'll learn more about them in the next few lessons of this series.

In the next article, we continue our exploration of classes and structures by taking a closer look at properties and inheritance.

2015-01-30T18:15:42.000Z · Bart Jacobs

Swift from Scratch: An Introduction to Classes and Structures

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-23197

In the previous articles of this series, we covered the basics of the Swift programming language. If you followed along, you should now have a solid understanding of variables, constants, functions, and closures. It's now time to use what we've learned so far and apply that knowledge to the object-oriented concepts available in Swift.

To understand the concepts discussed in this tutorial, it's important that you have a basic understanding of object-oriented programming. If you're not familiar with classes, objects, and methods, then I recommend you first read up on these topics before continuing with this article.

1. Introduction

In this article, we're going to explore the fundamental building blocks of object-oriented programming in Swift, classes and structures. In Swift, classes and structures feel and behave very similar, but there are a number of key differences that you need to understand to avoid common pitfalls.

In Objective-C, classes and structures are very different. This isn't true for Swift. In Swift, for example, both classes and structures can have properties and methods. Unlike C structures, structures in Swift can be extended and conform to protocols.

The question is "What is the difference between classes and structures?" We'll revisit this question later in this article. Let's first explore what a class looks like in Swift.

2. Terminology

Before we start working with classes and structures, I'd like to clarify a few commonly used terms in object-oriented programming. The terms classes, objects, and instances often confuse people that are new to object-oriented programming and it's therefore important that you know how Swift uses these terms.

Objects and Instances

A class is a blueprint or template for an instance of that class. The term object is often used to refer to an instance of a class. In Swift, however, classes and structures are very similar and it's therefore easier and less confusing to use the term instance for both classes and structures.

Methods and Functions

Earlier in this series, we worked with functions. In the context of classes and structures, we usually refer to functions as methods. In other words, methods are functions that belong to a particular class or structure. In the context of classes and structures, you can use both terms interchangeably since every method is a function.

3. Defining a Class

Let's get our feet wet by defining a class. Fire up Xcode and create a new playground. Remove the contents of the playground and add the following class definition.

The class keyword indicates that we're defining a class named Person. The implementation of the class is wrapped in a pair of curly braces. Even though the Person class isn't very useful in its current form, it is a proper, functional Swift class.

Properties

As in most other object-oriented programming languages, a class can have properties and methods. In the updated example below, we define three properties:

  • firstName, a variable property of type String?
  • lastName, a variable property of type String?
  • gender: a constant property of type String

As the example illustrates, defining properties in a class definition is very similar to defining regular variables and constants. We use the var keyword to define a variable property and the let keyword to define a constant property.

The above properties are also known as stored properties. Later in this series, we'll learn about computed properties. As the name implies, stored properties are properties that are stored by the class instance. They are very similar to properties in Objective-C.

It's important to note that every stored property needs to have a value after initialization or be defined as an optional type. In the above example, we give the gender property an initial value of "female". This tells Swift that the gender property is of type String. Later in this article, we'll take a look at initialization in more detail and explore how it ties in with initializing properties.

Even though we defined the gender property as a constant, it is possible to change its value during the initialization of a Person instance. Once the instance has been initialized, the gender property can no longer be modified since we defined the property as a constant property with the let keyword. This will become clearer later in this article when we discuss initialization.

Methods

We can add behavior or functionality to a class through functions or methods. In many programming languages, the term method is used instead of function in the context of classes and instances. Defining a method is almost identical to defining a function. In the next example, we define the fullName method in the Person class.
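The fullName listing is also missing here. A sketch matching the description, written in current Swift syntax (the original article predates joined(separator:)), could look like this:

```swift
class Person {
    var firstName: String?
    var lastName: String?
    let gender = "female"

    func fullName() -> String {
        var parts: [String] = []

        // Optional binding: only append components that have a value.
        if let firstName = firstName {
            parts.append(firstName)
        }
        if let lastName = lastName {
            parts.append(lastName)
        }

        return parts.joined(separator: " ")
    }
}
```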

The method fullName is nested in the class definition. It accepts no parameters and returns a String. The implementation of the fullName method is straightforward. Through optional binding, which we discussed earlier in this series, we access the values stored in the firstName and lastName properties.

We store the first and last name of the Person instance in an array and join the parts with a space. The reason for this somewhat awkward implementation is simple: the first and last name can be missing, which is why both properties are of type String?.

Instantiation

We've defined a class with a few properties and a method. How do we create an instance of the Person class? If you're familiar with Objective-C, then you're going to love the conciseness of the following snippet.

Instantiating an instance of a class is very similar to invoking a function. To create an instance, the name of the class is followed by a pair of parentheses, and the result is assigned to a constant or variable.

In our example, the constant john now points to an instance of the Person class. Does this mean that we can't change any of its properties? The next example answers this question.
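The snippet under discussion is not included in this extract. Assuming the Person class defined earlier, it would look something like this:

```swift
let john = Person()

john.firstName = "John"  // fine: variable property
john.lastName = "Doe"    // fine: variable property
john.gender = "male"     // error: gender is a constant property
```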

We can access the properties of an instance using the convenience of the dot syntax. In the example, we set firstName to "John", lastName to "Doe", and gender to "male". Before we draw any conclusions based on the above example, we need to check for any errors in the playground.

Setting firstName and lastName doesn't seem to cause any problems. Assigning "male" to the gender property, however, results in an error. The explanation is simple. Even though john is declared as a constant, that doesn't prevent us from modifying the Person instance. There's one caveat, though: only variable properties can be modified after initializing an instance. Properties that are defined as constant cannot be modified after initialization.

Note that I emphasized after in the previous sentence. A constant property can be modified during the initialization of an instance. While the gender property shouldn't be changed once a Person instance has been created, the class wouldn't be very useful if we could only instantiate female Person instances. Let's make the Person class a bit more flexible.

Initialization

Initialization is a step in the lifetime of an instance of a class or structure. During initialization, we prepare the instance for use by populating its properties with initial values. The initialization of an instance can be customized by implementing an initializer, a special type of method. Let's add an initializer to the Person class.
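The listing with the added initializer isn't present in the extract; it could look like this:

```swift
class Person {
    var firstName: String?
    var lastName: String?
    let gender: String

    init() {
        gender = "female"
        // or, more explicitly:
        // self.gender = "female"
    }
}
```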

Note that the name of the initializer, init, isn't preceded by the func keyword. In contrast to initializers in Objective-C, an initializer in Swift doesn't return the instance that's being initialized.

Another important detail is how we set the gender property with an initial value. Even though the gender property is defined as a constant property, we can set its value in the initializer. We set the gender property by using the property name, but it's also fine to be more explicit and write the following:

In the above example, self refers to the instance that's being initialized. This means that self.gender refers to the gender property of the instance. We can omit self, as in the first example, because there's no confusion about which property we're referring to. This isn't always the case though. Let me explain what I mean.

Parameters

In many situations, you want to pass initial values to the initializer to customize the instance you're instantiating. This is possible by creating a custom initializer that accepts one or more arguments. In the following example we create a custom initializer that accepts one argument, gender, of type String.
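The custom initializer listing is missing from this extract; a sketch matching the description follows:

```swift
class Person {
    var firstName: String?
    var lastName: String?
    let gender: String

    init(gender: String) {
        // self is required here because the parameter name
        // shadows the property name.
        self.gender = gender
    }
}
```

With this initializer in place, `let jack = Person(gender: "male")` creates a male Person instance.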

There are two things to note. First, we are required to access the gender property through self.gender to avoid ambiguity, since the local parameter name is also gender. Second, even though we haven't specified an external parameter name, Swift by default creates an external parameter name equal to the local parameter name. The result is the same as if we had prefixed the gender parameter with a # symbol.

In the following example, we instantiate another Person instance by invoking the custom initializer we just defined.

Even though the initial value of the gender property is set to "female" in the class definition, by passing a value for the gender parameter we can assign a custom value to the constant gender property during initialization.

Multiple Initializers

As in Objective-C, a class or structure can have multiple initializers. In the following example, we create two Person instances. In the first line, we use the default initializer. In the second line, we use the custom initializer we defined earlier.

4. Defining a Structure

Structures are surprisingly similar to classes, but there are a few key differences. Let's start by defining a basic structure.

At first glance, the only difference is the use of the struct keyword instead of the class keyword. The example also shows us an alternative approach to supply initial values to properties. Instead of setting an initial value for each property, we can give properties an initial value in the initializer of the structure. Swift won't throw an error, because it also inspects the initializer to determine the initial value—and type—of each property.
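The structure listing is missing from this extract. A definition matching the description, in which the initial values are supplied in the initializer rather than inline, could look like this:

```swift
struct Person {
    var firstName: String?
    var lastName: String?
    let gender: String

    init() {
        // Swift infers the type of gender from this assignment.
        gender = "female"
    }
}
```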

5. Classes and Structures

You may start to wonder what the difference is between classes and structures. At first glance, they look identical in form and function, with the exception of the class and struct keywords. There are a number of key differences though.

Inheritance

Classes support inheritance whereas structures don't. The following example illustrates this. The inheritance design pattern is indispensable in object-oriented programming and, in Swift, it's a key difference between classes and structures.

In the above example, the Person class is the parent or superclass of the Student class. This means that the Student class inherits the properties and behavior of the Person class. The last line illustrates this. We initialize a Student instance by invoking the custom initializer we defined earlier in the Person class.
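The inheritance listing is omitted in this extract; a self-contained sketch of the idea follows:

```swift
class Person {
    var firstName: String?
    var lastName: String?
    let gender: String

    init(gender: String) {
        self.gender = gender
    }
}

// Student inherits the properties, methods, and initializers of Person.
// Note that a struct could not do this; structures don't support inheritance.
class Student: Person {
    var school: String?
}

let student = Student(gender: "male")
```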

Copying and Referencing

The following concept is probably the most important thing about Swift you'll learn today: the difference between value and reference types. Structures are value types, which means that they are passed by value. An example illustrates this concept best.

We define a structure, Point, to encapsulate the data to store a coordinate in a two-dimensional space. We instantiate point1 with x equal to 0 and y equal to 0. We assign point1 to point2 and set the x coordinate of point1 to 10. If we output the x coordinate of both points, we discover that they are not equal.
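The listing itself is missing from this extract. Relying on the memberwise initializer Swift generates for structures, it would look like this:

```swift
struct Point {
    var x: Int
    var y: Int
}

var point1 = Point(x: 0, y: 0)
var point2 = point1   // point2 receives a *copy* of point1

point1.x = 10

// point1.x is now 10, but point2.x is still 0.
```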

Structures are passed by value while classes are passed by reference. If you plan to continue working with Swift, you need to understand the previous statement. When we assigned point1 to point2, Swift created a copy of point1 and assigned it to point2. In other words, point1 and point2 each point to a different instance of the Point structure.

Let's now repeat this exercise with the Person class. In the following example, we instantiate a Person instance, set its properties, assign person1 to person2, and update the firstName property of person1. To see what passing by reference means for classes, we output the value of the firstName property of both Person instances.
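The corresponding listing is not included in this extract; a minimal version of the exercise could look like this:

```swift
class Person {
    var firstName: String?
    var lastName: String?
}

let person1 = Person()
person1.firstName = "Jane"

let person2 = person1   // person2 references the *same* instance

person1.firstName = "John"

// person2.firstName is now also "John".
```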

The example proves that classes are reference types. This means that person1 and person2 refer to or reference the same Person instance. By assigning person1 to person2, Swift doesn't create a copy of person1. The person2 variable points to the same Person instance person1 is pointing to. Changing the firstName property of person1 also affects the firstName property of person2, because they are referencing the same Person instance.

As I mentioned several times in this article, classes and structures are very similar, but what separates them is important. If the above concepts aren't clear, then I encourage you to read the article one more time to let them sink in.

Learn More in Our Swift Programming Course

If you're interested in taking your Swift education to the next level, you can take a look at our full course on Swift development.

Conclusion

In this installment of Swift from Scratch, we've started exploring the basics of object-oriented programming in Swift. Classes and structures are the fundamental building blocks of most Swift projects and we'll learn more about them in the next few lessons of this series.

In the next article, we continue our exploration of classes and structures by taking a closer look at properties and inheritance.

2015-01-30T18:15:42.000Z Bart Jacobs

Create a Ringtone Randomizer on Android

Android users are always on the lookout for apps that can alter the behavior of their devices in new and innovative ways. The Android platform gives its developers a lot of freedom to build such apps. In this tutorial, you will learn how to create an app that randomizes the ringtone of an Android phone every time it receives a call.

Prerequisites

If you'd like to follow along, then make sure you have the latest version of Android Studio installed. You can get it from the Android Developer website.

Because this is an intermediate tutorial, I won't cover the basics in too much detail. I assume that you have already created one or more Android apps and are familiar with the basics of the Android SDK.

1. Create a New Project

Start Android Studio and create a new project. Set the name of the application to RingtoneRandomizer. Make sure you choose a unique package name.

This app can run on all phones that have API level 8 or higher, so set the minimum SDK to Android 2.2.

Next, choose Add No Activity and click Finish.

2. Edit Manifest

Our app will need the following permissions:

  • android.permission.READ_PHONE_STATE to detect incoming calls
  • android.permission.WRITE_SETTINGS to change the default ringtone setting
  • android.permission.READ_EXTERNAL_STORAGE to fetch the list of available ringtones

Add the following to AndroidManifest.xml:
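The manifest snippet itself is missing from this extract; the permission declarations described above would look like this:

```xml
<uses-permission android:name="android.permission.READ_PHONE_STATE" />
<uses-permission android:name="android.permission.WRITE_SETTINGS" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
```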

This app has one Activity, to allow the user to activate/deactivate the ringtone changing behavior.

It also has a BroadcastReceiver to detect call state changes. As shown below, the intent action that it listens to is android.intent.action.PHONE_STATE.
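The receiver declaration is also missing here; a sketch matching the description (the RingReceiver class is created later in this tutorial) would be:

```xml
<receiver android:name=".RingReceiver">
    <intent-filter>
        <action android:name="android.intent.action.PHONE_STATE" />
    </intent-filter>
</receiver>
```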

3. Edit strings.xml

The strings.xml file contains the strings the app uses. Update values/strings.xml as shown below:

4. Create Activity Layout

The Activity needs the following views:

  • ToggleButton to activate/deactivate the ringtone randomizer
  • ListView to display all available ringtones
  • TextView that acts as a label

Create a file named layout/activity_main.xml and replace its contents with the following. As you can see, the layout is pretty simple and straightforward.

5. Create RingtoneHelper Helper Class

In order to avoid dealing with the RingtoneManager directly in the Activity or the BroadcastReceiver, we're going to create a helper class named RingtoneHelper.

The RingtoneHelper class will have two static methods that make use of the RingtoneManager class.

fetchAvailableRingtones

The fetchAvailableRingtones method fetches the list of available ringtones, returning a List of Ringtone objects.

In the fetchAvailableRingtones method, we start by creating an instance of the RingtoneManager class. The RingtoneManager object can list all the sounds available on the device, including the sounds for alarms and other notifications.

We use the setType method to set its type to TYPE_RINGTONE as we are only interested in ringtones.

We then invoke the getCount method to know how many ringtones are available and call the getRingtone method in a for loop, adding each ringtone to ringtones.
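The original listing is omitted in this extract. A sketch of the method as described (it requires the Android SDK, so it won't compile on a plain JVM) might look like this:

```java
public static List<Ringtone> fetchAvailableRingtones(Context context) {
    List<Ringtone> ringtones = new ArrayList<Ringtone>();

    RingtoneManager manager = new RingtoneManager(context);
    manager.setType(RingtoneManager.TYPE_RINGTONE);

    // The count referred to in the text is obtained through the
    // manager's cursor.
    int count = manager.getCursor().getCount();
    for (int i = 0; i < count; i++) {
        ringtones.add(manager.getRingtone(i));
    }

    return ringtones;
}
```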

changeRingtone

The changeRingtone method is responsible for changing the ringtone of the device, the core feature of our app.

We first check in SharedPreferences if the user has activated the ringtone randomizer. We then use the Random class to pick a random number that's less than the number of available ringtones.

The getRingtoneUri method is invoked to fetch the URI of the corresponding ringtone and pass it to the setActualDefaultRingtoneUri method to change the ringtone.
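The changeRingtone listing is missing from this extract. A sketch following the description (the "active" preference key is the one used by MainActivity later in the tutorial) could look like this:

```java
public static void changeRingtone(Context context) {
    SharedPreferences prefs =
            PreferenceManager.getDefaultSharedPreferences(context);

    // Do nothing unless the user has activated the randomizer.
    if (!prefs.getBoolean("active", false)) {
        return;
    }

    RingtoneManager manager = new RingtoneManager(context);
    manager.setType(RingtoneManager.TYPE_RINGTONE);

    // Pick a random ringtone index and make it the default ringtone.
    int count = manager.getCursor().getCount();
    int random = new Random().nextInt(count);

    RingtoneManager.setActualDefaultRingtoneUri(
            context,
            RingtoneManager.TYPE_RINGTONE,
            manager.getRingtoneUri(random));
}
```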

6. Create Broadcast Receiver

Create a new class named RingReceiver that inherits from BroadcastReceiver. The new class will have only one method named onReceive. In this method, all we do is call the helper class's changeRingtone method if the following criteria are met:

  • the action of the received Intent is equal to TelephonyManager.ACTION_PHONE_STATE_CHANGED
  • the value of the lookup key EXTRA_STATE is equal to TelephonyManager.EXTRA_STATE_RINGING

This is what the RingReceiver class should look like:
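The listing itself is not included in this extract; based on the two criteria above, the class would look roughly like this:

```java
public class RingReceiver extends BroadcastReceiver {

    @Override
    public void onReceive(Context context, Intent intent) {
        // Only react to the phone starting to ring.
        if (TelephonyManager.ACTION_PHONE_STATE_CHANGED
                    .equals(intent.getAction())
                && TelephonyManager.EXTRA_STATE_RINGING
                    .equals(intent.getStringExtra(TelephonyManager.EXTRA_STATE))) {
            RingtoneHelper.changeRingtone(context);
        }
    }
}
```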

7. Create Activity

Create a new class named MainActivity that inherits from Activity. We override the onCreate method and perform the following actions:

  • invoke setContentView to use the layout defined in activity_main.xml
  • call the helper class's fetchAvailableRingtones method to populate a List of ringtones
  • initialize the ListView
  • initialize the ToggleButton

The MainActivity class should now look something like this:

initializeToggle

In the initializeToggle method we set the state of the toggle button based on a boolean value named active in SharedPreferences. This value is set to false by default.

We also add an OnCheckedChangeListener to the toggle button to update the value in SharedPreferences. The putBoolean and commit methods of the Editor are used to accomplish this.

initializeList

The initializeList method creates an Adapter based on the List of ringtones. Use android.R.layout.simple_list_item_1, which is nothing more than a TextView, as the layout for the items of the ListView. Override the Adapter's getView method so that each item displays the ringtone's title, obtained via the Ringtone class's getTitle method.

Once the Adapter is ready, assign it to the ListView by using the ListView's setAdapter method.

8. Compile and Run

Our app is now ready to be deployed on an Android phone. You should be able to see all the ringtones available on your phone when you start the app. Click on the toggle button to activate the randomizer.

This is what the final result could look like.

Call yourself from another phone a couple of times. The first time you receive a call, your original ringtone will be played. From the next call onwards, you will hear a random ringtone every time.

Note that this app changes the default ringtone of your phone. If you have assigned a specific ringtone to a contact or a group of contacts, that ringtone will still be used.

Conclusion

You now know how to make use of functionality available in the RingtoneManager class. You have also learned how to detect incoming calls. Feel free to build on this app to randomize other notifications in a similar manner. Visit the Android Developer website to learn more about the RingtoneManager class.

2015-02-02T16:45:27.000Z Ashraff Hathibelagal


Identifying People With Qualcomm's Snapdragon SDK

It wasn't that long ago that taking photos was fairly expensive. Cameras required film with limited capacity and seeing the results also required additional time and more money. These inherent constraints ensured that we were selective with the photos we took.

Fast forward to today and these constraints have been diminished thanks to technology, but we are now faced with a new problem, filtering, organizing, and uncovering important photos from the many we take.

This new problem is what inspired this tutorial. In it, I will demonstrate how we can use new tools to help make the user's life easier by introducing new ways of filtering and organizing our content.

1. Concept

For this project, we're going to look at a different way of filtering through your collection of photos. Along the way, you'll learn how to integrate and use Qualcomm's Snapdragon SDK for facial processing and recognition.

We will enable the user to filter a collection of photos by identity/identities. The collection will be filtered by identities from a photo the user taps on, as demonstrated below.

Wireframes of the application showing a master detail pattern

2. Overview

The main focus of this post is the introduction of facial processing and recognition using Qualcomm's Snapdragon SDK whilst—hopefully—indirectly encouraging new ways of thinking and using derived metadata from content.

To avoid getting fixated on the plumbing, I have created a template providing the basic service for scanning through the user's collection of photos and a grid for displaying them. Our goal is to enhance this template with the concept proposed above.

In the following section, we will briefly review these components before moving onto introducing Qualcomm's Snapdragon SDK.

3. Skeleton

As mentioned above, our goal is to focus on the Snapdragon SDK so I have created a skeleton that has all the plumbing implemented. Below is a diagram and description of the project, which is available for download from GitHub.

Component model for the application broken down into Presentation Service and Data layers

Our data package contains an implementation of SQLiteOpenHelper (IdentityGalleryDatabase) responsible for creating and managing our database. The database will consist of three tables, one to act as a pointer to the media record (photo), another for detected identities (identity), and finally the relationship table connecting identities with their photos (identity_photo).

High-level database schema: three tables (photo, identity) and a relationship table (identity_photo)

We will use the identity table to store the attributes provided by the Snapdragon SDK, detailed in a later section of this tutorial.

Also included in the data package are a Provider (IdentityGalleryProvider) and Contract (IdentityGalleryContract) class, which is nothing more than a standard Provider acting as a wrapper of the SQLiteOpenHelper class.

To give you a sense of how to interact with the Provider class, the following code is taken from the TestProvider class. As the name suggests, it is used for testing the Provider class. 

The service package is responsible for iterating through, cataloguing, and eventually processing the images available via the MediaStore. The service itself extends the IntentService as an easy way of performing the processing on its own thread. The actual work is delegated to the GalleryScanner, which is the class we'll be extending for facial processing and recognition.

This GalleryScannerIntentService is instantiated each time the MainActivity is created with the following call:

When started, GalleryScannerIntentService fetches the last scan date and passes this into the constructor of the GalleryScanner. It then calls the scan method to start iterating through the contents of the MediaItem content provider—for items after the last scan date.

If you inspect the scan method of the GalleryScanner class, you'll notice that it's fairly verbose—nothing complicated is happening here. The method needs to query for media files stored internally (MediaStore.Images.Media.INTERNAL_CONTENT_URI) and externally (MediaStore.Images.Media.EXTERNAL_CONTENT_URI). Each item is then passed to a hook method, which is where we will place our code for facial processing and recognition.

Another two hook methods in the GalleryScanner class are available to us (as the method names suggest) to initialize and de-initialize the FacialProcessing instance.

The final package is the presentation package. As the name suggests, it hosts the Activity class responsible for rendering our gallery. The gallery is a GridView attached to a CursorAdapter. As explained above, tapping an item will query the database for any photos that contain one of the identities of the selected photo. For example, if you tap on a photo of your friend Lisa and her boyfriend Justin, the query will filter all photos that contain either or both Lisa and Justin.

4. Qualcomm's Snapdragon SDK

To help developers get the most out of its hardware, Qualcomm has released an impressive set of SDKs, one of them being the Snapdragon SDK. The Snapdragon SDK exposes an optimized set of functions for facial processing.

The SDK is broadly split into two parts, facial processing and facial recognition. Given that not all devices support both—or any—of these features, which is probably the reason for having these features separated, the SDK provides an easy way of checking which features the device supports. We will cover this in more detail later.

The facial processing provides a way of extracting features from a photo (of a face) including:  

  • Blink Detection: Measure how open each eye is.
  • Gaze Tracking: Assess where the subject is looking.
  • Smile Value: Estimate the degree of the smile.
  • Face Orientation: Track the yaw, pitch, and roll of the head.

Facial recognition, as the name suggests, provides the ability to identify people in a photo. It's worth noting that all processing is done locally—as opposed to the cloud.

These features can be used in real time (video/camera) or offline (gallery). In our exercise, we'll use them offline, but there are minimal differences between the two approaches.

Consult the online documentation for supported devices to learn more about facial processing and facial recognition.

5. Adding Facial Processing and Recognition

In this section, we will be filling in those hook methods—with surprisingly few lines of code—to provide our application the ability to extract face properties and identify people. To work along, download the source from GitHub and open the project in Android Studio. Alternatively, you can download the completed project.

Step 1: Installing the Snapdragon SDK

The first thing we need to do is grab the SDK from Qualcomm's website. Note that you'll need to register/log in and agree with Qualcomm's terms and conditions.

Once downloaded, unarchive the contents and navigate to /Snapdragon_sdk_2.3.1/java/libs/libs_facial_processing/. Copy the sd-sdk-facial-processing.jar file into your project's /app/libs/ folder as shown below.

Android Studio Libs folder via the Project File panel

After copying the Snapdragon SDK, right-click the sd-sdk-facial-processing.jar and select Add as Library... from the list of options.

Android Studio File menu - Add As Library option

This will add the library as a dependency in your build.gradle file as shown below.

The final step is to add the native library. To do this, create a folder called jniLibs in your /app/src/main/ folder and copy the armeabi folder (from the SDK download) and its contents into it.

Android Studios Project folder view showing the structure where to put the native binaries

We are now ready to implement the logic to identify people using the functionality of the API. The following code snippets belong in the GalleryScanner class.

Step 2: Initialization

Let's first tackle the initialization hook method. 

We first need to check that the device supports both facial processing and facial recognition. If it doesn't, we throw an UnsupportedOperationException exception.

After that, we assign our local reference of the FacialProcessing class, mFacialProcessing, to a new instance using the factory method getInstance. This will return null if an instance is already in use, in which case the consumer is required to call release on that reference.

If we have successfully obtained an instance of a FacialProcessing object, we configure it by first setting the recognition confidence. We do this using a local variable, set to 57 in this case, on a scale from 0 to 100. The confidence is a threshold used when trying to resolve identities; any match scoring below this threshold will be deemed a separate identity.

In terms of determining the value, as far as I can tell, this is a trial and error process. The higher the threshold, the more accurate the recognition, with the trade-off of an increased number of missed matches, that is, the same person being registered as separate identities.

We then set the FacialProcessing mode to FP_MODE_STILL. Your options here are either FP_MODE_STILL or FP_MODE_VIDEO. As the names suggest, one is optimized for still images while the other for continuous frames, both having obvious use cases.

FP_MODE_STILL, as you might suspect, provides more accurate results. But as you will see later, FP_MODE_STILL is implied by the method we use to process the image, so this line can be omitted. I only added it for completeness.

We then call loadAlbum (method of the GalleryScanner class), which is what we'll look at next.

The only interesting line here is:

Its counter method is:

A single FacialProcessing instance can be thought of as a session. Added persons (explained below) are stored locally (referred to as the "recognition album") within that instance. To allow your album to persist over multiple sessions, that is, each time you obtain a new instance, you need a way to persist and load them.

The serializeRecogntionAlbum method converts the album into a byte array and conversely the deserializeRecognitionAlbum will load and parse a previously stored album as a byte array.

Step 3: De-initialization

We now know how to initialize the FacialProcessing class for facial processing and recognition. Let's now turn our focus to de-initializing it by implementing the deinitFacialProcessing method.

As mentioned above, there can only be one instance of the FacialProcessing class at a time so we need to ensure we release it before finishing our task. We do this via a release method. But first we make the recognition album persist so that we can use the results over multiple sessions. In this case, when the user takes or receives new photos, we want to ensure that we use the previously recognized identities for the same people.

Step 4: Processing the Image

We're finally ready to flesh out the final hook method and use the FacialProcessing class. The following code blocks belong to the processImage method. I've split them up for clarity.

The method takes a reference to an instance of the ContentValues class, which holds the metadata for this image, along with the URI pointing to the image. We use this to load the image into memory.

The following code snippet is to replace the above comment // continued below (1).

As mentioned above, we first pass the static image to the FacialProcessing instance via the setBitmap method. Using this method implicitly uses the FP_MODE_STILL mode. This method returns True if the image was successfully processed and False if the processing failed.

The alternative method for processing streaming images (typically for camera preview frames) is:

Most of the parameters are obvious. You do have to pass in whether the frame is flipped (this is usually necessary for the front-facing camera) and if any rotation has been applied (usually set via the setDisplayOrientation method of a Camera instance).

We then query for the number of faces detected and only continue if at least one is found. The getFaceData method returns the details for each detected face as an array of FaceData objects, where each FaceData object encapsulates facial features including:

  • face boundary (FACE_RECT)
  • face, mouth, and eye locations (FACE_COORDINATES)
  • contour of the face (FACE_CONTOUR)
  • degree of smile (FACE_SMILE)
  • direction of eyes (FACE_GAZE)
  • flag indicating if either eye (or both eyes) is blinking (FACE_BLINK)
  • yaw, pitch, and roll of the face (FACE_ORIENTATION)
  • generated or derived identification (FACE_IDENTIFICATION)

There is an overload to this method which takes a set of enums (as described above) for feature points to be included, removing/minimizing redundant calculations.

We now move on to inspecting the FaceData object to extract the identity and features. Let's first see how facial recognition is done.

The following code snippet is to replace the above comment // continued below (2).

We first request the assigned person id via the getPersonId method. This will return -111 (FP_PERSON_NOT_REGISTERED) if no identity exists in the currently loaded album, otherwise returning the id of a matching person from the loaded album.

If no identity exists, then we add it via the addPerson method of the FacialProcessing object, passing it the index of the FaceData item we're currently inspecting. The method returns the assigned person id if successful, otherwise returning an error. This occurs when trying to add an identity that already exists.

Alternatively, when the person was matched with an identity stored in our loaded album, we call the FacialProcessing object's updatePerson method, passing it the existing id and index of the FaceData item. Adding a person multiple times increases recognition performance. You can add up to ten faces for a single person.

The final line simply returns the associated identity id from our database, inserting it if the person id doesn't already exist.

It's not shown above, but the FaceData instance exposes the method getRecognitionConfidence for returning the recognition confidence (0 to 100). Depending on your needs, you can use this to influence flow.

The final snippet demonstrates how to query each of the other features from the FaceData instance. In this demo, we don't make use of them, but with a little imagination I'm sure you can think of ways to put them to good use.

The following code snippet is to replace the above comment // continued below (3).

That completes the processing code. If you return to the gallery and tap on an image, you should see it filtering out any photos that do not contain any people identified in the selected photo.

Conclusion

We started this tutorial talking about how technology can be used to help organize the user's content. In context aware computing, whose goal is to use context as an implicit cue to enrich the impoverished interaction from humans to computers, making it easier to interact with computers, this is known as auto-tagging. By marking up content with more meaningful and useful data—for both the computer and us—we allow for more intelligent filtering and processing.

We've seen this used frequently with textual content, the most obvious example being spam filters and, more recently, news readers, but less so with rich media content, such as photos, music, and video. Tools like the Snapdragon SDK provide us with an opportunity to extract meaningful features from rich media, exposing its properties to the user and computer.

It's not hard to imagine how you could extend our application to allow filtering based on sentiment by using a smile as the major feature or social activity by counting the number of faces. One such implementation can be seen in this Smart Gallery feature.

2015-02-04T16:15:24.000Z2015-02-04T16:15:24.000ZJoshua Newnham

Identifying People With Qualcomm's Snapdragon SDK


It wasn't that long ago that taking photos was fairly expensive. Cameras required film with limited capacity and seeing the results also required additional time and more money. These inherent constraints ensured that we were selective with the photos we took.

Fast forward to today and these constraints have been diminished thanks to technology, but we are now faced with a new problem, filtering, organizing, and uncovering important photos from the many we take.

This new problem is what inspired this tutorial. In it, I will demonstrate how we can use new tools to help make the user's life easier by introducing new ways of filtering and organizing our content.

1. Concept

For this project, we're going to look at a different way of filtering through your collection of photos. Along the way, you'll learn how to integrate and use Qualcomm's Snapdragon SDK for facial processing and recognition.

We will enable the user to filter a collection of photos by identity/identities. The collection will be filtered by identities from a photo the user taps on, as demonstrated below.

Wireframes of the application showing a master detail pattern

2. Overview

The main focus of this post is the introduction of facial processing and recognition using Qualcomm's Snapdragon SDK whilst—hopefully—indirectly encouraging new ways of thinking and using derived metadata from content.

To avoid getting bogged down in the plumbing, I have created a template that provides the basic service for scanning through the user's collection of photos and a grid for displaying them. Our goal is to enhance this with the concept proposed above.

In the following section, we will briefly review these components before moving onto introducing Qualcomm's Snapdragon SDK.

3. Skeleton

As mentioned above, our goal is to focus on the Snapdragon SDK so I have created a skeleton that has all the plumbing implemented. Below is a diagram and description of the project, which is available for download from GitHub.

Component model for the application broken down into Presentation Service and Data layers

Our data package contains an implementation of SQLiteOpenHelper (IdentityGalleryDatabase) responsible for creating and managing our database. The database will consist of three tables, one to act as a pointer to the media record (photo), another for detected identities (identity), and finally the relationship table connecting identities with their photos (identity_photo).

Highlevel database schema - 3 tables of photo identity and a relationship table identity_photo

We will use the identity table to store the attributes provided by the Snapdragon SDK, detailed in a later section of this tutorial.

Also included in the data package are a Provider (IdentityGalleryProvider) and Contract (IdentityGalleryContract) class, which is nothing more than a standard Provider acting as a wrapper of the SQLiteOpenHelper class.

To give you a sense of how to interact with the Provider class, the following code is taken from the TestProvider class. As the name suggests, it is used for testing the Provider class. 

The service package is responsible for iterating through, cataloguing, and eventually processing the images available via the MediaStore. The service itself extends the IntentService as an easy way of performing the processing on its own thread. The actual work is delegated to the GalleryScanner, which is the class we'll be extending for facial processing and recognition.

This GalleryScannerIntentService is instantiated each time the MainActivity is created with the following call:
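The call itself is not shown here; a minimal sketch of what it might look like in MainActivity's onCreate (the service class name comes from the skeleton project, the rest is an assumption) is:

```java
// Sketch (assumed): kick off the gallery scan from MainActivity.
Intent intent = new Intent(this, GalleryScannerIntentService.class);
startService(intent);
```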

When started, GalleryScannerIntentService fetches the last scan date and passes this into the constructor of the GalleryScanner. It then calls the scan method to start iterating through the contents of the MediaItem content provider—for items after the last scan date.

If you inspect the scan method of the GalleryScanner class, you'll notice that it's fairly verbose—nothing complicated is happening here. The method needs to query for media files stored internally (MediaStore.Images.Media.INTERNAL_CONTENT_URI) and externally (MediaStore.Images.Media.EXTERNAL_CONTENT_URI). Each item is then passed to a hook method, which is where we will place our code for facial processing and recognition.

Another two hook methods in the GalleryScanner class are available to us (as the method names suggest) to initialize and de-initialize the FacialProcessing instance.

The final package is the presentation package. As the name suggests, it hosts the Activity class responsible for rendering our gallery. The gallery is a GridView attached to a CursorAdapter. As explained above, tapping an item will query the database for any photos that contain one of the identities of the selected photo. For example, if you tap on a photo of your friend Lisa and her boyfriend Justin, the query will filter all photos that contain either or both Lisa and Justin.

4. Qualcomm's Snapdragon SDK

To help developers make their hardware look great and do it justice, Qualcomm has released an amazing set of SDKs, one being the Snapdragon SDK. The Snapdragon SDK exposes an optimized set of functions for facial processing.

The SDK is broadly split into two parts, facial processing and facial recognition. Given that not all devices support both—or any—of these features, which is probably the reason for having these features separated, the SDK provides an easy way of checking which features the device supports. We will cover this in more detail later.

The facial processing provides a way of extracting features from a photo (of a face) including:  

  • Blink Detection: Measure how open each eye is.
  • Gaze Tracking: Assess where the subject is looking.
  • Smile Value: Estimate the degree of the smile.
  • Face Orientation: Track the yaw, pitch, and roll of the head.

Facial recognition, as the name suggests, provides the ability to identify people in a photo. It's worth noting that all processing is done locally—as opposed to the cloud.

These features can be used in real time (video/camera) or offline (gallery). In our exercise, we'll use these features offline, but there are minimal differences between the two approaches.

Consult the online documentation for supported devices to learn more about facial processing and facial recognition.

5. Adding Facial Processing and Recognition

In this section, we will be filling in those hook methods—with surprisingly few lines of code—to provide our application the ability to extract face properties and identify people. To work along, download the source from GitHub and open the project in Android Studio. Alternatively, you can download the completed project.

Step 1: Installing the Snapdragon SDK

The first thing we need to do is grab the SDK from Qualcomm's website. Note that you'll need to register/log in and agree with Qualcomm's terms and conditions.

Once downloaded, unarchive the contents and navigate to /Snapdragon_sdk_2.3.1/java/libs/libs_facial_processing/. Copy the sd-sdk-facial-processing.jar file into your project's /app/libs/ folder as shown below.

Android Studio Libs folder via the Project File panel

After copying the Snapdragon SDK, right-click the sd-sdk-facial-processing.jar and select Add as Library... from the list of options.

Android Studio File menu - Add As Library option

This will add the library as a dependency in your build.gradle file as shown below.
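The resulting entry in build.gradle will look something like the sketch below (the exact form depends on your version of the Android Gradle plugin):

```groovy
dependencies {
    // Pulls in the Snapdragon facial-processing jar copied into app/libs
    compile files('libs/sd-sdk-facial-processing.jar')
}
```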

The final step is to add the native library. To do this, create a folder called jniLibs in your /app/src/main/ folder and copy the armeabi folder (from the SDK download) and its contents into it.

Android Studios Project folder view showing the structure where to put the native binaries

We are now ready to implement the logic to identify people using the functionality of the API. The following code snippets belong in the GalleryScanner class.

Step 2: Initialization

Let's first tackle the initialization hook method. 

We first need to check that the device supports both facial processing and facial recognition. If it doesn't, we throw an UnsupportedOperationException exception.

After that, we assign our local reference of the FacialProcessing class, mFacialProcessing, to a new instance using the factory method getInstance. This will return null if an instance is already in use, in which case the consumer is required to call release on that reference.

If we have successfully obtained an instance of a FacialProcessing object, we configure it by first setting the confidence. We do this using a local variable, which is 57 in this case from a range of 0 to 100. The confidence is a threshold when trying to resolve identities. Any matches below this threshold will be deemed as separate identities.

In terms of determining the value, as far as I can tell, this is a trial and error process. The higher the threshold, the stricter the matching: you get fewer false positives, but at the cost of the same person occasionally being split into separate identities.

We then set the FacialProcessing mode to FP_MODE_STILL. Your options here are either FP_MODE_STILL or FP_MODE_VIDEO. As the names suggest, one is optimized for still images while the other for continuous frames, both having obvious use cases.

FP_MODE_STILL, as you might suspect, provides more accurate results. But as you will see later, FP_MODE_STILL is implied by the method we use to process the image, so this line can be omitted. I only added it for completeness.

We then call loadAlbum (method of the GalleryScanner class), which is what we'll look at next.
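Putting these steps together, the initialization hook might look something like the sketch below. The method and constant names are the ones this tutorial refers to; the hook method's name and the exact signatures are assumptions.

```java
// Sketch of the initialization hook (names as referenced in this tutorial).
private void initFacialProcessing() throws UnsupportedOperationException {
    boolean processingSupported = FacialProcessing.isFeatureSupported(
            FacialProcessing.FEATURE_LIST.FEATURE_FACIAL_PROCESSING);
    boolean recognitionSupported = FacialProcessing.isFeatureSupported(
            FacialProcessing.FEATURE_LIST.FEATURE_FACIAL_RECOGNITION);

    if (!processingSupported || !recognitionSupported) {
        throw new UnsupportedOperationException(
                "Facial processing or recognition is not supported on this device");
    }

    // Returns null if another instance is already in use.
    mFacialProcessing = FacialProcessing.getInstance();
    if (mFacialProcessing != null) {
        mFacialProcessing.setRecognitionConfidence(CONFIDENCE_THRESHOLD); // e.g. 57, range 0-100
        mFacialProcessing.setProcessingMode(FacialProcessing.FP_MODES.FP_MODE_STILL);
        loadAlbum();
    }
}
```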

The only interesting line here is:

Its counter method is:

A single FacialProcessing instance can be thought of as a session. Added persons (explained below) are stored locally (referred to as the "recognition album") within that instance. To allow your album to persist over multiple sessions, that is, each time you obtain a new instance, you need a way to persist and load them.

The serializeRecognitionAlbum method converts the album into a byte array and, conversely, the deserializeRecognitionAlbum method loads and parses a previously stored album from a byte array.
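In code, the two calls are mirror images of each other. The sketch below shows the round trip; how and where you store the byte array (SharedPreferences, a file, and so on) is up to you, and the helper names here are hypothetical.

```java
// Persist the recognition album at the end of a session (hypothetical helper).
byte[] albumBuffer = mFacialProcessing.serializeRecognitionAlbum();
persistAlbumBuffer(albumBuffer);

// Restore it when a new session starts (hypothetical helper).
byte[] restoredBuffer = readAlbumBuffer();
if (restoredBuffer != null) {
    mFacialProcessing.deserializeRecognitionAlbum(restoredBuffer);
}
```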

Step 3: De-initialization

We now know how to initialize the FacialProcessing class for facial processing and recognition. Let's now turn our focus to de-initializing it by implementing the deinitFacialProcessing method.

As mentioned above, there can only be one instance of the FacialProcessing class at a time so we need to ensure we release it before finishing our task. We do this via a release method. But first we make the recognition album persist so that we can use the results over multiple sessions. In this case, when the user takes or receives new photos, we want to ensure that we use the previously recognized identities for the same people.
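A sketch of what deinitFacialProcessing might look like, where saveAlbum is the hypothetical counterpart of loadAlbum:

```java
private void deinitFacialProcessing() {
    if (mFacialProcessing != null) {
        saveAlbum(); // persist the recognition album for the next session
        mFacialProcessing.release();
        mFacialProcessing = null;
    }
}
```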

Step 4: Processing the Image

We're finally ready to flesh out the final hook method and use the FacialProcessing class. The following code blocks belong to the processImage method. I've split them up for clarity.

The method takes a reference to an instance of the ContentValues class, which holds the metadata for this image, along with the URI pointing to the image. We use this to load the image into memory.

The following code snippet is to replace the above comment // continued below (1).

As mentioned above, we first pass the static image to the FacialProcessing instance via the setBitmap method. Using this method implicitly uses the FP_MODE_STILL mode. It returns true if the image was successfully processed and false if the processing failed.

The alternative method for processing streaming images (typically for camera preview frames) is:

Most of the parameters are obvious. You do have to pass in whether the frame is flipped (this is usually necessary for the front-facing camera) and if any rotation has been applied (usually set via the setDisplayOrientation method of a Camera instance).

We then query for the number of faces detected and only continue if at least one is found. The getFaceData method returns the details for each detected face as an array of FaceData objects, where each FaceData object encapsulates facial features including:

  • face boundary (FACE_RECT)
  • face, mouth, and eye locations (FACE_COORDINATES)
  • contour of the face (FACE_CONTOUR)
  • degree of smile (FACE_SMILE)
  • direction of eyes (FACE_GAZE)
  • flag indicating if either eye (or both eyes) is blinking (FACE_BLINK)
  • yaw, pitch, and roll of the face (FACE_ORIENTATION)
  • generated or derived identification (FACE_IDENTIFICATION)

There is an overload to this method which takes a set of enums (as described above) for feature points to be included, removing/minimizing redundant calculations.
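The detection part of processImage might be sketched as follows. The SDK method names come from this tutorial, while the image-loading code and field names are assumptions.

```java
// Load the image into memory (may throw on a bad URI).
Bitmap bitmap;
try {
    bitmap = MediaStore.Images.Media.getBitmap(mContext.getContentResolver(), contentUri);
} catch (IOException e) {
    return;
}

// Hand the still image to the SDK; this implies FP_MODE_STILL.
if (!mFacialProcessing.setBitmap(bitmap)) {
    return; // processing failed
}

int numFaces = mFacialProcessing.getNumFaces();
if (numFaces > 0) {
    // One FaceData object per detected face.
    FaceData[] faceDataArray = mFacialProcessing.getFaceData();
    // ... inspect each FaceData for identity and features
}
```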

We now move on to inspecting the FaceData object to extract the identity and features. Let's first see how facial recognition is done.

The following code snippet is to replace the above comment // continued below (2).

We first request the assigned person id via the getPersonId method. This will return -111 (FP_PERSON_NOT_REGISTERED) if no identity exists in the currently loaded album, otherwise returning the id of a matching person from the loaded album.

If no identity exists, then we add it via the addPerson method of the FacialProcessing object, passing it the index of the FaceData item we're currently inspecting. The method returns the assigned person id if successful, otherwise returning an error. This occurs when trying to add an identity that already exists.

Alternatively, when the person was matched with an identity stored in our loaded album, we call the FacialProcessing object's updatePerson method, passing it the existing id and index of the FaceData item. Adding a person multiple times increases recognition performance. You can add up to ten faces for a single person.

The final line simply returns the associated identity id from our database, inserting it if the person id doesn't already exist.
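The recognition branch might be sketched like this; getOrInsertPerson stands in for the database helper and is hypothetical, and the location of the FP_PERSON_NOT_REGISTERED constant is an assumption.

```java
for (int i = 0; i < faceDataArray.length; i++) {
    FaceData faceData = faceDataArray[i];

    // -111 (FP_PERSON_NOT_REGISTERED) means no match in the loaded album.
    int personId = faceData.getPersonId();
    if (personId == FacialProcessing.FP_PERSON_NOT_REGISTERED) {
        personId = mFacialProcessing.addPerson(i);
    } else {
        // Re-training with another face of the same person improves recognition.
        mFacialProcessing.updatePerson(personId, i);
    }

    long identityRowId = getOrInsertPerson(personId); // hypothetical DB helper
}
```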

It's not shown above, but the FaceData instance exposes the method getRecognitionConfidence for returning the recognition confidence (0 to 100). Depending on your needs, you can use this to influence flow.

The final snippet demonstrates how to query each of the other features from the FaceData instance. In this demo, we don't make use of them, but with a little imagination I'm sure you can think of ways to put them to good use.

The following code snippet is to replace the above comment // continued below (3).

That completes the processing code. If you return to the gallery and tap on an image, you should see it filtering out any photos that do not contain any people identified in the selected photo.

Conclusion

We started this tutorial talking about how technology can be used to help organize the user's content. In context-aware computing, which aims to use context as an implicit cue to enrich the otherwise impoverished interaction between humans and computers, this is known as auto-tagging. By marking up content with more meaningful and useful data—for both the computer and us—we allow for more intelligent filtering and processing.

We've seen this used frequently with textual content, the most obvious example being spam filters and, more recently, news readers, but less so with rich media content, such as photos, music, and video. Tools like the Snapdragon SDK provide us with an opportunity to extract meaningful features from rich media, exposing its properties to the user and computer.

It's not hard to imagine how you could extend our application to allow filtering based on sentiment by using a smile as the major feature or social activity by counting the number of faces. One such implementation can be seen in this Smart Gallery feature.

2015-02-04T16:15:24.000Z2015-02-04T16:15:24.000ZJoshua Newnham

Quick Tip: Enumerations in Swift


Enumerations are a common design pattern in many programming languages. While you may be familiar with enumerations in C and Objective-C, Swift's implementation of enumerations is significantly more powerful and flexible. In this quick tip, you'll learn what's special about enumerations in Swift, how to use them in your projects, and what makes them so powerful.

1. What Is an Enumeration?

Enumerations aren't new and they're certainly not unique to Swift. However, if you're familiar with enumerations in C, then you're going to love Swift's powerful take on enumerations.

If enums or enumerations are new to you, then you may not be familiar with what they have to offer. In Swift, enumerations are first class types that define a list of possible values for that type.

An example might be the possible states of a network connection. The possible states could be:

  • disconnected
  • connecting
  • connected

We could add a fourth state for when the state is unknown. With this example in mind, let's see how to define and implement such an enumeration.

Basics

Like I said, enumerations are first class types in Swift. An enumeration definition looks very similar to a class or structure definition. In the example below, we define the ConnectionState enumeration.

The name of the enumeration is preceded by the enum keyword and followed by a pair of curly braces. The ConnectionState enumeration will define the possible states of a network connection. To define these states, we add member values or members to the enumeration's definition. The definition of a member value always starts with the case keyword.
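A sketch of the definition described above (the member names follow the article's Swift 1 convention of capitalized cases; current Swift style prefers lowercase):

```swift
// The possible states of a network connection, as a first class type.
enum ConnectionState {
    case Unknown
    case Disconnected
    case Connecting
    case Connected
}
```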

In C or Objective-C, the above enumeration would look a bit different as illustrated in the example below. Each value of the enumeration corresponds with an integer, for example, ConnectionStateUnknown equals 0, ConnectionStateDisconnected equals 1, etc.

This isn't true in Swift. The members of an enumeration don't automatically correspond with an integer value. The members of the ConnectionState enumeration are values themselves and they are of type ConnectionState. This makes working with enumerations safer and more explicit.

Raw Values

It is possible to explicitly specify the values of the members of an enumeration. In the following example, the members of the ConnectionState enumeration have a raw value of type Int. Each member is assigned a raw value, corresponding with an integer.

Note that we specify the type of the raw values in the enumeration's definition and that no two members can have the same raw value. If we only specify a value for the Unknown member, then Swift automatically increments from that value and assigns unique raw values to the remaining members. To better illustrate this, the below example is identical to the previous definition of the ConnectionState enumeration.
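A sketch of the raw-value variant, with only the first member made explicit:

```swift
// Raw values of type Int; only Unknown is explicit, the rest are auto-incremented.
enum ConnectionState: Int {
    case Unknown = 0
    case Disconnected   // raw value 1
    case Connecting     // raw value 2
    case Connected      // raw value 3
}
```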

2. Working with Enumerations

Initialization

Using the ConnectionState enumeration is similar to using any other type in Swift. In the next example, we declare a variable, connectionState, and set its value to ConnectionState.Connecting.

The value of connectionState is ConnectionState.Connecting and the variable is of type ConnectionState.

Swift's type inference is very convenient when working with enumerations. Because we declared connectionState as being of type ConnectionState, we can now assign a new value by using the shorthand dot syntax for enumerations.
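Both forms together might look like this:

```swift
enum ConnectionState {
    case Unknown, Disconnected, Connecting, Connected
}

// Declare with an explicit, fully qualified value...
var connectionState = ConnectionState.Connecting

// ...then rely on type inference and use the shorthand dot syntax.
connectionState = .Connected
```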

Control Flow

Using enumerations in an if or switch statement is straightforward. Remember that switch statements need to be exhaustive. Add a default case if necessary.

The following example demonstrates how the ConnectionState enum can be used. It also shows how to access the associated value of an enum member. The canConnect function accepts a ConnectionState instance and returns a Bool.

The canConnect function only returns true if the ConnectionState instance passed to the function is equal to .Connected and its associated value is an Int equal to 3000. Note that the associated value of the Connected member is available in the switch statement as a constant named port, which we can then use in the corresponding case.
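A sketch of canConnect as described above, binding the associated value to the constant port:

```swift
enum ConnectionState {
    case Unknown
    case Disconnected
    case Connecting(Int, Double)  // port number, timeout interval
    case Connected(Int)           // port number
}

// Returns true only for .Connected with an associated port of 3000.
func canConnect(_ connectionState: ConnectionState) -> Bool {
    switch connectionState {
    case .Connected(let port) where port == 3000:
        return true
    default:
        return false
    }
}
```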

3. Associated Values

Another compelling feature of Swift enums is associated values. Each member of an enum can have an associated value. Associated values are very flexible. For example, associated values of different members of the same enum don't need to be of the same type. Take a look at the following example to better understand the concept of associated values.

The Unknown and Disconnected members don't have an associated value. The Connecting member has an associated value of type (Int, Double), specifying the port number and timeout interval of the connection. The Connected member has an associated value of type Int, specifying the port number.

It's important to understand that an associated value is linked to or associated with a member of the enumeration. The member's value remains unchanged. The next example illustrates how to create a ConnectionState instance with an associated value.
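The enum and an instance with an associated value might look like this:

```swift
enum ConnectionState {
    case Unknown
    case Disconnected
    case Connecting(Int, Double)  // port number, timeout interval
    case Connected(Int)           // port number
}

// Create an instance, attaching a port and a timeout to the Connecting member.
let connectionState = ConnectionState.Connecting(3000, 30.0)
```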

4. Methods and Value Types

Methods

Enumerations are pretty powerful in Swift. Enumerations can even define methods, such as an initializer to select a default member value if none was specified.

In this example, we initialize an instance of the ConnectionState enumeration without explicitly specifying a value for it. In the initializer of the enumeration, however, we set the instance to Unknown. The result is that the connectionState variable is equal to ConnectionState.Unknown.
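A sketch of such an initializer:

```swift
enum ConnectionState {
    case Unknown, Disconnected, Connecting, Connected

    // Fall back to a default member when no value is specified.
    init() {
        self = .Unknown
    }
}

let connectionState = ConnectionState()  // equals ConnectionState.Unknown
```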

Value Types

Like structures, enumerations are value types, which means that an enumeration is not passed by reference, like class instances, but by value. The following example illustrates this.

Even though we assign connectionState1 to connectionState2, the values of connectionState1 and connectionState2 are different at the end of the example.

When connectionState1 is assigned to connectionState2, Swift creates a copy of connectionState1 and assigns that to connectionState2. In other words, connectionState1 and connectionState2 refer to two different ConnectionState instances.
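The value semantics can be demonstrated with a short sketch:

```swift
enum ConnectionState {
    case Unknown, Disconnected, Connecting, Connected
}

var connectionState1 = ConnectionState.Connecting
var connectionState2 = connectionState1  // value type: this makes a copy
connectionState2 = .Connected            // only the copy changes
```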

Conclusion

Enums in Swift are incredibly powerful compared to, for example, enums in C. One of the most powerful aspects of enumerations is that they are first-class types in Swift. Type safety is a key aspect of the Swift language and enumerations fit perfectly in that mindset.

2015-02-06T16:30:28.000Z2015-02-06T16:30:28.000ZBart Jacobs

Quick Tip: Enumerations in Swift

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-23205

Enumerations are a common design pattern in many programming languages. While you may be familiar with enumerations in C and Objective-C, Swift's implementation of enumerations is significantly more powerful and flexible. In this quick tip, you'll learn what's special about enumerations in Swift, how to use them in your projects, and what makes them so powerful.

1. What Is an Enumeration?

Enumerations aren't new and they're certainly not unique to Swift. However, if you're familiar with enumerations in C, then you're going to love Swift's powerful take on enumerations.

If enums or enumerations are new to you, then you may not be familiar with what they have to offer. In Swift, enumerations are first class types that define a list of possible values for that type.

An example might be the possible states of a network connection. The possible states could be:

  • disconnected
  • connecting
  • connected

We could add a fourth state for the case the state is unknown. With this example in mind, let's see how to define and implement such an enumeration.

Basics

Like I said, enumerations are first class types in Swift. An enumeration definition looks very similar to a class or structure definition. In the example below, we define the ConnectionState enumeration.

The name of the enumeration is preceded by the enum keyword and followed by a pair of curly braces. The ConnectionState enumeration will define the possible states of a network connection. To define these states, we add member values or members to the enumeration's definition. The definition of a member value always starts with the case keyword.

In C or Objective-C, the above enumeration would look a bit different as illustrated in the example below. Each value of the enumeration corresponds with an integer, for example, ConnectionStateUnknown equals 0, ConnectionStateDisconnected equals 1, etc.

This isn't true in Swift. The members of an enumeration don't automatically correspond with an integer value. The members of the ConnectionState enumeration are values themselves and they are of type ConnectionState. This makes working with enumerations safer and more explicit.

Raw Values

It is possible to explicitly specify the values of the members of an enumeration. In the following example, the members of the ConnectionState enumeration have a raw value of type Int. Each member is assigned a raw value, corresponding with an integer.

Note that we specify the type of the raw values in the enumeration's definition and that no two member values can have the same raw value. If we only specify a value for the Unknown member, then Swift will automatically increment the value of the Unknown member and assign unique values to the other members of the enumeration. To better illustrate this, the below example is identical to the previous definition of the ConnectionState enumeration.

2. Working with Enumerations

Initialization

Using the ConnectionState enumeration is similar to using any other type in Swift. In the next example, we declare a variable, connectionState, and set its value to ConnectionState.Connecting.

The value of connectionState is ConnectionState.Connecting and the variable is of type ConnectionState.

Swift's type inference is very convenient when working with enumerations. Because we declared connectionState as being of type ConnectionState, we can now assign a new value by using the shorthand dot syntax for enumerations.
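For example:

```swift
enum ConnectionState {
    case Unknown, Disconnected, Connecting, Connected
}

var connectionState = ConnectionState.Connecting

// Because the type is already known, the enumeration name can be omitted.
connectionState = .Disconnected
```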

Control Flow

Using enumerations in an if or switch statement is straightforward. Remember that switch statements need to be exhaustive. Add a default case if necessary.

The following example demonstrates how the ConnectionState enum can be used. It also shows how to access the associated value of an enum member. The canConnect function accepts a ConnectionState instance and returns a Bool.
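A sketch of such a function, using the associated values introduced in the next section (written in current Swift syntax; the article's original listing is missing from this copy):

```swift
enum ConnectionState {
    case Unknown
    case Disconnected
    case Connecting(Int, Double) // port number, timeout interval
    case Connected(Int)          // port number
}

func canConnect(_ connectionState: ConnectionState) -> Bool {
    switch connectionState {
    case .Connected(let port):
        // The associated value is bound to the constant port.
        return port == 3000
    default:
        return false
    }
}
```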

The canConnect function only returns true if the ConnectionState instance passed to the function is equal to .Connected and its associated value is an Int equal to 3000. Note that the associated value of the Connected member is available in the switch statement as a constant named port, which we can then use in the corresponding case.

3. Associated Values

Another compelling feature of Swift enums are associated values. Each member of an enum can have an associated value. Associated values are very flexible. For example, associated values of different members of the same enum don't need to be of the same type. Take a look at the following example to better understand the concept of associated values.
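Such a definition might look like this sketch:

```swift
enum ConnectionState {
    case Unknown
    case Disconnected
    // Associated values can differ from member to member.
    case Connecting(Int, Double) // port number, timeout interval
    case Connected(Int)          // port number
}
```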

The Unknown and Disconnected members don't have an associated value. The Connecting member has an associated value of type (Int, Double), specifying the port number and timeout interval of the connection. The Connected member has an associated value of type Int, specifying the port number.

It's important to understand that an associated value is linked to or associated with a member of the enumeration. The member's value remains unchanged. The next example illustrates how to create a ConnectionState instance with an associated value.
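For example:

```swift
enum ConnectionState {
    case Unknown
    case Disconnected
    case Connecting(Int, Double)
    case Connected(Int)
}

// A Connecting value with port 3000 and a timeout of 30 seconds.
let connectionState = ConnectionState.Connecting(3000, 30.0)
```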

4. Methods and Value Types

Methods

Enumerations are pretty powerful in Swift. Enumerations can even define methods, such as an initializer to select a default member value if none was specified.
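Such an initializer could be sketched as:

```swift
enum ConnectionState {
    case Unknown, Disconnected, Connecting, Connected

    // Fall back to Unknown when no member value is specified.
    init() {
        self = .Unknown
    }
}

var connectionState = ConnectionState()
```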

In this example, we initialize an instance of the ConnectionState enumeration without explicitly specifying a value for it. In the initializer of the enumeration, however, we set the instance to Unknown. The result is that the connectionState variable is equal to ConnectionState.Unknown.

Value Types

Like structures, enumerations are value types, which means that an enumeration is not passed by reference, like class instances, but by value. The following example illustrates this.
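A sketch of that behavior:

```swift
enum ConnectionState {
    case Unknown, Disconnected, Connecting, Connected
}

var connectionState1 = ConnectionState.Connecting
var connectionState2 = connectionState1 // a copy is made here

// Changing the original does not affect the copy.
connectionState1 = .Connected
```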

Even though we assign connectionState1 to connectionState2, the values of connectionState1 and connectionState2 are different at the end of the example.

When connectionState1 is assigned to connectionState2, Swift creates a copy of connectionState1 and assigns that to connectionState2. In other words, connectionState1 and connectionState2 refer to two different ConnectionState instances.

Conclusion

Enums in Swift are incredibly powerful compared to, for example, enums in C. One of the most powerful aspects of enumerations is that they are first-class types in Swift. Type safety is a key aspect of the Swift language and enumerations fit perfectly in that mindset.

2015-02-06T16:30:28.000Z · Bart Jacobs

Localizing a Windows Phone 8 Application


Recent store trends show that offering your app in English will cover only about 25% of Windows Phone customers. Adding Spanish, French, Mandarin, Russian, and German can increase coverage to more than 75% of Windows Phone customers.

Introduction

In this tutorial, I will teach you how to localize a Windows Phone 8 app to reach more potential customers. I will show you how to set up your Windows Phone 8 project so that the entire user interface, including error messages, can be localized. At the end of the tutorial, you will have learned how to prepare your app so that it can be translated into multiple languages. 

Consider the following best practices to build an app that can be easily localized:

  • Create separate resource files for strings, images, and videos to make your code language independent. This ensures that it can support different languages.
  • Enable multiline support and text wrap in controls. This gives you more space for displaying strings.
  • Localize sentences rather than words. This may seem like extra work, but it's the best solution. It will ensure that, for example, error messages are properly translated in every language.
  • Don't assume that every language uses parameters in the same order.
  • Don't reuse strings, as doing so can cause localization problems if the context of a string changes. For example, words like "text" and "fax" can be used as both a verb and a noun in English, which can complicate the translation process. Create a separate string for each context.
  • Use unique attributes to identify your resources. A resource is accessed by its unique identifier, which doesn't change, rather than by the actual value of the resource.

We will first discuss the culture and language support offered by Windows Phone 8 and then discuss the steps involved in preparing an app for localization. After that, we will see how to build a localized application bar. We will finally discuss how to test a localized app.  

1. Culture & Language Support

Numbers, currencies, date, and time are formatted differently in various cultures. Every supported culture is listed in the CultureInfo class. The CultureInfo class exposes properties to access region format data for a specific culture. You can also use the CultureInfo class with a culture code to access built-in formatting and sorting rules for that culture.

The display language determines the default user interface font. The Windows Phone 8.1 user interface is localized in 50 languages, but your app can display a much larger selection of languages. When you add support to your app for additional languages, Visual Studio generates a .resx file for each language.

The InitializeLanguage function in the App.xaml.cs file sets the app's RootFrame.Language based on the value of the AppResources.ResourceLanguage resource.
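Abridged from the standard project template, that initialization looks roughly like this (error handling omitted):

```csharp
// App.xaml.cs -- called from the App constructor.
private void InitializeLanguage()
{
    // Set the font to match the display language defined by the
    // ResourceLanguage resource string.
    RootFrame.Language = XmlLanguage.GetLanguage(AppResources.ResourceLanguage);

    // Set the FlowDirection of all elements under the root frame based
    // on the ResourceFlowDirection resource string.
    FlowDirection flow = (FlowDirection)Enum.Parse(
        typeof(FlowDirection), AppResources.ResourceFlowDirection);
    RootFrame.FlowDirection = flow;
}
```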

2. Standard Localization Steps

New projects and templates for Windows Phone 8 XAML apps provide several helpful new features:

  • A neutral language resource file, AppResources.resx, is added by default to every new project.
  • The LocalizedStrings helper class is already configured to provide easy access to the resources that match the current culture of an app.
  • A new resource file with locale-specific name and app language initialization parameters in place is created when adding a Supported Culture from the Project Properties in Visual Studio.

Let's see how all this works by creating a sample app, with English as the base language.

Step 1: Binding XAML Text Elements

We first bind the XAML text elements to string resources. 

Copy every hard-coded string in the app’s XAML that needs to be localized to a new row in the string table of your AppResources.resx file.

Next, point the XAML elements to this string resource by adding the unique name and a standard binding clause in place of the hard-coded value.

The TextBlock shown below is bound using the string resource in place of the hard-coded text.
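The binding might look like this, assuming a string resource named ApplicationTitle (the name is illustrative; use the key you added to the string table):

```xml
<!-- LocalizedStrings is the helper class exposed as a static resource. -->
<TextBlock Text="{Binding Path=LocalizedResources.ApplicationTitle,
                          Source={StaticResource LocalizedStrings}}" />
```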

Search through the project’s code-behind for places where the code modifies a text attribute of a user interface element. Replace the hard-coded value with a reference to the string resource for each element.

Step 2: Adding Languages

Adding languages to a Windows Phone 8 project in Visual Studio is simple. Navigate to the project’s property page and select the languages you want to support from the Supported Cultures list.

In the sample project, I have added Chinese (Simplified) and Spanish (Spain), whose locale codes are zh-Hans and es-ES, respectively. Select the languages you would like your app to support in the Supported Cultures box on the project’s Properties page.

Select the target languages in the Supported Cultures box

When you save the project, Visual Studio creates an AppResources.resx file for each locale. The newly created resource file is pre-populated with the existing resources from the main AppResources.resx file.

Resource files

Each resource file includes two special resources named ResourceLanguage and ResourceFlowDirection. These two resources are used when the InitializeLanguage method is called from the App.xaml.cs constructor. Their values are checked automatically to ensure that they match the culture of the resource file loaded at run time.

The ResourceLanguage value is initialized with the locale name of the resource file and is used to set the RootFrame.Language value. The ResourceFlowDirection value is set to the traditional direction of that resource language.

Note that ResourceLanguage and ResourceFlowDirection can be modified to align with your app's design style.

Step 3: Using the Multilingual App Toolkit for Translation

The Multilingual App Toolkit (MAT), which is integrated into Visual Studio, provides translation support, translation file management, and localization tools to create Windows Phone and Windows Store apps. Here are some advantages of using the Multilingual App Toolkit:

  • It centralizes string resource and metadata management.
  • The tool makes it easy to create, import, and export translation files in the XLIFF format, a standard in the industry.
  • There's no need to switch back and forth between resource files.
  • It is easy to add new translated languages right from the project's context menu.

To start using the Multilingual App Toolkit for your Windows Phone project, download and install the Visual Studio extension from MSDN. With the toolkit installed, select the project and select Enable Multilingual App Toolkit from the Tools menu as shown below.

Enable Multilingual App Toolkit in the Tools menu

Note that localized resources are defined per project, not per solution. This means that Visual Studio's focus must be within the project for the Enable Multilingual App Toolkit option to be available from the Tools menu.

After enabling the Multilingual App Toolkit, Visual Studio will add files to the Resource folder of your project, one of them having a new type and name, AppResources.qps.ploc.xlf. The .xlf extension refers to the XLIFF standard file format I mentioned earlier.

Generated xlf files on enabling MAT

Additional languages can be added via the Multilingual App Toolkit. This results in a new supported culture being added to your project. It also causes the addition of a supported language in the project's WMAppManifest.xml.

To translate the resources to Chinese (Simplified) and Spanish (Spain), right-click the .xlf file and select Generate machine translations from the contextual menu. Apply the translations and rebuild the project to see the changes being reflected in the .resx files for the two additional languages.

3. Building a Localized Application Bar

You can add an app bar to a page in your app, either in the page XAML or by using C# in the page code-behind. ApplicationBar is not a DependencyObject and doesn't support bindings. This means that if you need to localize it, you must build the ApplicationBar in C# code-behind.

When a new Windows Phone 8 or a Windows Phone 8.1 Silverlight project is created, some commented code for localizing the application bar is added to MainPage.xaml.cs by default. Uncomment the method BuildLocalizedApplicationBar and add the buttons, menu items, and associated resources that you would like to use in your app.

The BuildLocalizedApplicationBar method creates a new instance of ApplicationBar. Buttons and menu items are added to it and the text value is set to the localized string from AppResources. The app bar button has a binding to AppBarButtonText, which is defined in the AppResources.resx file. Call the BuildLocalizedApplicationBar method from the page's constructor to load the ApplicationBar.
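The commented template code looks roughly like this (the icon path and resource names are the template defaults):

```csharp
private void BuildLocalizedApplicationBar()
{
    ApplicationBar = new ApplicationBar();

    // Create a new button and set the text value to the localized string from AppResources.
    ApplicationBarIconButton appBarButton = new ApplicationBarIconButton(
        new Uri("/Assets/AppBar/appbar.add.rest.png", UriKind.Relative));
    appBarButton.Text = AppResources.AppBarButtonText;
    ApplicationBar.Buttons.Add(appBarButton);

    // Create a new menu item with the localized string from AppResources.
    ApplicationBarMenuItem appBarMenuItem =
        new ApplicationBarMenuItem(AppResources.AppBarMenuItemText);
    ApplicationBar.MenuItems.Add(appBarMenuItem);
}
```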

In the sample app, users can select their display language via the ApplicationBar menu. As shown in the screenshot below, the menu items stay consistent across display languages while the app bar button is localized.

App Bar Menu

When a user taps a display language in the ApplicationBar menu, the SetUILanguage method is called with the name of the target language passed as a parameter to set the display language.

The SetUILanguage method first resets the CurrentUICulture of the app to the locale supplied in the call. Any resource-bound text rendered by the app will use the resources of the specified locale after this call.

Next, set the FlowDirection and Language of the RootFrame, which will cause the user interface rendered by the app to follow the new settings.
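A sketch of such a method; the name SetUILanguage comes from the article, but the body is an assumption based on the steps just described:

```csharp
private void SetUILanguage(string locale)
{
    // Reset the current UI culture to the locale supplied in the call.
    Thread.CurrentThread.CurrentUICulture = new CultureInfo(locale);

    // Re-apply flow direction and language on the root frame so the
    // rendered user interface follows the new settings.
    FlowDirection flow = (FlowDirection)Enum.Parse(
        typeof(FlowDirection), AppResources.ResourceFlowDirection);
    App.RootFrame.FlowDirection = flow;
    App.RootFrame.Language = XmlLanguage.GetLanguage(AppResources.ResourceLanguage);
}
```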

The BuildLocalizedApplicationBar method is called from the page's constructor after the call to InitializeComponent to localize the ApplicationBar.

4. Testing a Localized App

The Windows Phone emulator can be used to test the localized app. Change the display language to the language that the app targets to verify that the content renders correctly.

  • Click Start debugging from the Debug menu.
  • Navigate to region & language from Settings in the app list.
  • Click Display language and select one of the languages. Use the table in the Mapping Culture Names to Display Languages section to determine which display language string to select.
  • Accept the changes and restart the emulator.
  • Launch your app by returning to the app list, and verify that the language of each localized string matches the display language setting that you selected earlier.

Here's how the text changes when the display language is changed. Notice that the menu items remain consistent even if the display language changes.

App Screenshots

To dynamically change the language, call the SetUILanguage method with the name of the target language passed as a parameter.

In the sample app, every user interface element of the MainPage.xaml has already been rendered, so the currently displayed elements need to be refreshed after changing the language. The updateUI method refreshes the user interface elements of MainPage.xaml.

Note that the placement of each user interface element on the screen is unchanged, regardless of the display language.

Conclusion

In this tutorial, you have learned how to prepare your app for localization. Hard-coded XAML and code-behind strings that need to be localized are placed in a resource file string table and given a unique key.

Each hard-coded value is replaced by a binding clause in XAML or a resource reference in code using the key for its related string resource.

You can add additional display languages and use the Multilingual App Toolkit to make the end-to-end process of translating your app substantially easier. Feel free to download the tutorial's source files to use as a reference.

2015-02-09T16:45:33.000Z · Vivek Maskara


An Introduction to Xamarin.Forms and SQLite


At some point in your mobile development career, you are going to need to deal with data. Dealing with data means more than processing and displaying information to the end user. You are going to need to store this information somewhere and be able to get at it easily. Thanks to Xamarin and open source software, you can easily store your data with an industry tested platform, SQLite.

1. Storing Data

So why do you need to care about data when it comes to your app? Because it is all around you. You can't escape it. No matter what kind of app you are writing, whether it's a game or some sort of utility, you are going to need to store data at some point. That data could be user data, statistics, or anything else of interest that either you or the user will be interested in at some point in the use of your app.

At this point, let's assume that you have decided to go the Xamarin.Forms route, because you are interested in targeting several platforms not only in the logic for your app, but also for the user interface layer.

Great. But what do you do now that you need to store information within your app? Don't worry, there's a very simple solution to this problem, SQLite.

2. Introduction to SQLite

You have now seen the term SQLite a couple of times in this tutorial, so it's time to get down to the meat. What exactly is SQLite? SQLite is a public domain, zero-configuration, transactional SQL database engine. All this means is that you have a full-featured mechanism to store your data in a structured way. Not only do you get all of this, you also have access to the source code, because it's open source.

We will not be covering all the features of SQLite in this tutorial simply because there are too many to go through. Rest assured that you will have the ability to easily create a table structure to store data and retrieve it in your app. These are the concepts that we will be focusing on in this tutorial.

In the world of Xamarin.Forms, SQLite is a natural fit for a very simple reason. The SQLite engine is readily available on both iOS and Android. This means that you can use this technology right out of the box when you choose to write a Xamarin.Forms app.

Getting access to SQLite functionality in Windows Phone apps requires one additional step that we will go over a little later. All this functionality and cross platform accessibility is great, but how will we get access to the native platform implementations from our C# code in Xamarin.Forms? From a nice NuGet package, that's how. Let's take a look.

3. Creating an App

Let's start by creating a simple Xamarin.Forms application. In this tutorial, I will be using a Mac running Xamarin Studio, but you can just as easily be using Xamarin Studio or Visual Studio running on a PC.

Step 1: Create a Project


An Introduction to Xamarin.Forms and SQLite


At some point in your mobile development career, you are going to need to deal with data. Dealing with data means more than processing and displaying information to the end user. You are going to need to store this information somewhere and be able to get at it easily. Thanks to Xamarin and open source software, you can easily store your data with an industry tested platform, SQLite.

1. Storing Data

So why do you need to care about data when it comes to your app? Because it is all around you. You can't escape it. No matter what kind of app you are writing, whether it's a game or some sort of utility, you are going to need to store data at some point. That data could be user data, statistics, or anything else that you or the user will want to get back at some point while using your app.

At this point, let's assume that you have decided to go the Xamarin.Forms route, because you are interested in targeting several platforms not only in the logic for your app, but also for the user interface layer.

Great. But what do you do now that you need to store information within your app? Don't worry, there's a very simple solution to this problem, SQLite.

2. Introduction to SQLite

You have now seen the term SQLite a couple of times in this tutorial, so it's time to get down to the meat. What exactly is SQLite? SQLite is a public domain, zero-configuration, transactional SQL database engine. All this means is that you have a full-featured mechanism to store your data in a structured way. Not only do you get all of this, you also have access to the source code, because it's open source.

We will not be covering all the features of SQLite in this tutorial simply because there are too many to go through. Rest assured that you will have the ability to easily create a table structure to store data and retrieve it in your app. These are the concepts that we will be focusing on in this tutorial.

In the world of Xamarin.Forms, SQLite is a natural fit for a very simple reason. The SQLite engine is readily available on both iOS and Android. This means that you can use this technology right out of the box when you choose to write a Xamarin.Forms app.

Getting access to SQLite functionality in Windows Phone apps requires one additional step that we will go over a little later. All this functionality and cross-platform accessibility is great, but how do we get access to the native platform implementations from our C# code in Xamarin.Forms? From a nice NuGet package, that's how. Let's take a look.

3. Creating an App

Let's start by creating a simple Xamarin.Forms application. In this tutorial, I will be using a Mac running Xamarin Studio, but you can just as easily follow along with Xamarin Studio or Visual Studio on a PC.

Step 1: Create a Project

We start the process by creating a new Xamarin.Forms app. To do this, simply select the Mobile Apps project template family on the left and choose one of the Xamarin.Forms templates on the right. You can use either the PCL or Shared version of the template, but for this case, I will be using the PCL. You can follow along using either one, but there will be a slight difference if you choose the Shared template later on.

You can give the project any name you like. I will call this project IntroToSQLite. After you click the OK button, your IDE will go through the process of creating your solution. Your solution will contain four projects:

  1. IntroToSQLite - PCL project
  2. IntroToSQLite.Android - Android project
  3. IntroToSQLite.iOS - iOS project
  4. IntroToSQLite.WinPhone - Windows Phone project (only on a PC)

Step 2: Add SQLite Support

Now that we have our basic project structure set up, we can start to add SQLite access to our PCL project. We need to install a new package, SQLite.Net, into our project. This is a .NET wrapper around SQLite that will allow us to access the native SQLite functionality from a Xamarin.Forms PCL or Shared project.

We access this NuGet package by right-clicking on either Packages or References, depending on which IDE you are using, and select Add Package (or Reference). In the search box, type sqlite.net. This will show you a rather large collection of packages that you can include in your project.

Add the SQLite NuGet package

Since I chose to go the PCL route for my Xamarin.Forms project, I will need to select the SQLite.Net PCL package to include in my project. Which one do you choose if you went the Shared project route? None.

SQLite and Shared Projects

If you've chosen the Shared project template earlier in the tutorial, you may be wondering how to get access to the SQLite package. The short answer is that you can't. If you remember from a previous tutorial, you can't add references to a Shared project. To get access to SQLite from a Shared project, you simply add the source code to the project.

Add Some Code

The final step in adding SQLite functionality to the PCL project is to create an interface that will allow us access into the SQLite world. The reason we are doing this is because we need to access the native functionality on the different platforms as we saw in a previous tutorial.

Let's start by defining an interface that is going to give us access to the SQLite database connection. Within your PCL project, create a new interface named ISQLite and replace the implementation with the following:
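A minimal version of that interface could look like this (SQLiteConnection comes from the SQLite.Net PCL package; treat this as a sketch rather than the article's exact listing):

```csharp
using SQLite.Net;

public interface ISQLite
{
    // Each platform project returns a connection to its local database file.
    SQLiteConnection GetConnection();
}
```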

This is the interface that we will implement and get access to via the DependencyService from the native implementations.

Step 3: Define the Database

Now that we have access to the SQLite functionality, let's define our database. This particular application is going to be quite simple: we are just going to store some of our random thoughts as we come up with them.

We start by creating a class that will represent the data stored in a particular table. Let's call this class RandomThought.
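Here is one way to write that class, using the attribute types that ship with the SQLite.Net PCL package:

```csharp
using System;
using SQLite.Net.Attributes;

public class RandomThought
{
    // Unique row identifier, assigned by SQLite when a row is inserted.
    [PrimaryKey, AutoIncrement]
    public int ID { get; set; }

    public string Thought { get; set; }
    public DateTime CreatedOn { get; set; }
}
```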

As you can see, this is a very simple class with three properties. Two of those properties are just your normal everyday properties, Thought and CreatedOn. These two properties are going to represent columns in the SQLite database, which will contain a table named RandomThought. The third property, ID, is also going to represent a column within the table and contain a unique ID that we can use to refer to a specific RandomThought row within the table.

The interesting thing about the ID property is that it is decorated with two attributes, PrimaryKey and AutoIncrement. PrimaryKey tells SQLite that this column is going to be the primary key of the table, which means that, by default, it needs to be unique and there is an index applied to it to speed up retrievals from this table when referring to a row by this column.

AutoIncrement means that, when we insert a new RandomThought into this table, the ID column will be populated automatically with the next available integer value. The next step is to create this table in the database.

I like to create a class that represents my database and keep all the logic to access the database and its tables within this class. For this, I will create a class named RandomThoughtDatabase:
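A sketch of what that class could look like; the method names GetThoughts and AddThought are illustrative choices rather than names taken from the original listing:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using SQLite.Net;
using Xamarin.Forms;

public class RandomThoughtDatabase
{
    private readonly SQLiteConnection _connection;

    public RandomThoughtDatabase()
    {
        // Ask the platform project for a connection via the DependencyService...
        _connection = DependencyService.Get<ISQLite>().GetConnection();
        // ...and create the RandomThought table if it isn't there yet.
        _connection.CreateTable<RandomThought>();
    }

    public IEnumerable<RandomThought> GetThoughts()
    {
        return _connection.Table<RandomThought>().OrderBy(t => t.CreatedOn).ToList();
    }

    public void AddThought(string thought)
    {
        _connection.Insert(new RandomThought
        {
            Thought = thought,
            CreatedOn = DateTime.Now
        });
    }
}
```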

This is a very simple implementation as it only contains a few methods. These are typically some of the basic operations you perform when dealing with a database. One point of note is the constructor. Within the constructor we are doing two things.

First, we are using the DependencyService class to get a registered class that implements the ISQLite interface and call its GetConnection method.

Second, we use the CreateTable method on the SQLiteConnection class to create a table called RandomThought. This method creates the table if it doesn't already exist and exits gracefully if it does.

Obviously, you can get as sophisticated with this class as you want by adding all sorts of functionality, but these operations are typically a good starting point.

Step 4: Add the iOS Implementation

Most of the code that we use to interact with the database is going to be found in the PCL (or Shared) project. But we still need to do a little wiring up in the native implementations to get everything working correctly.

The main obstacle that we need to work around on the native side when using SQLite is where we are going to store the actual database file. This differs from platform to platform. Here is what we need for iOS.

Before we can add any SQLite functionality to the iOS project, we need to add the SQLite.Net PCL as well as the SQLite.Net PCL - XamarinIOS Platform packages to this project. You can follow the same steps that you took in Step 2, making sure to add both to the project. Once you have added these packages, you can start to write some SQLite code within the iOS project.

Let's create an implementation of the ISQLite interface for iOS. Start by creating a new class, naming it SQLite_iOS.
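A sketch of the iOS implementation, assuming the SQLite.Net PCL - XamarinIOS Platform package; the database file name and the IntroToSQLite.iOS namespace are illustrative:

```csharp
using System;
using System.IO;
using SQLite.Net;
using SQLite.Net.Platform.XamarinIOS;
using Xamarin.Forms;

[assembly: Dependency(typeof(IntroToSQLite.iOS.SQLite_iOS))]

namespace IntroToSQLite.iOS
{
    public class SQLite_iOS : ISQLite
    {
        public SQLiteConnection GetConnection()
        {
            const string fileName = "RandomThoughts.db3";
            // On iOS, app data conventionally lives in the Library folder
            // next to Documents inside the app sandbox.
            var docsPath = Environment.GetFolderPath(Environment.SpecialFolder.Personal);
            var libraryPath = Path.Combine(docsPath, "..", "Library");
            return new SQLiteConnection(new SQLitePlatformIOS(),
                                        Path.Combine(libraryPath, fileName));
        }
    }
}
```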

We get access to the correct location to store the database file, create a new SQLiteConnection object, and pass it back to our PCL (or Shared) project. The assembly attribute at the top of the file is used to identify this class as a Dependency that can be retrieved via the Get method on the DependencyService class.

Step 5: Add the Android Implementation

This step is very similar to the previous one. The only difference is that the code will change a little due to the fact that the location of the database file will be different. You will still need to add the appropriate packages to the Android project (SQLite.Net PCL and SQLite.Net PCL - XamarinAndroid Platform) as you did before. Once you have completed that, you can add the appropriate code in a new class named SQLite_Android.
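A sketch along the same lines for Android, assuming the SQLite.Net PCL - XamarinAndroid Platform package; again, the file name and namespace are illustrative:

```csharp
using System;
using System.IO;
using SQLite.Net;
using SQLite.Net.Platform.XamarinAndroid;
using Xamarin.Forms;

[assembly: Dependency(typeof(IntroToSQLite.Droid.SQLite_Android))]

namespace IntroToSQLite.Droid
{
    public class SQLite_Android : ISQLite
    {
        public SQLiteConnection GetConnection()
        {
            const string fileName = "RandomThoughts.db3";
            // On Android, the app's personal files folder is a safe home
            // for the database file.
            var path = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.Personal),
                fileName);
            return new SQLiteConnection(new SQLitePlatformAndroid(), path);
        }
    }
}
```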

You now have a working implementation of the ISQLite interface from the perspective of your Android app.

Step 6: Add the Windows Phone Implementation

Since I am running this app from a Mac, I won't be creating the Windows Phone implementation, but if you would like to do this, you can.

The first step is to add support to your Windows Phone project for SQLite. As mentioned earlier, SQLite comes by default on iOS and Android. This is not true for Windows Phone, but it is supported. To get it installed, you can follow the instructions found on the Xamarin website.

After installing SQLite, the process of adding the functionality for Windows Phone will be almost exactly the same, except that the packages to install are SQLite.Net PCL and SQLite.Net PCL - WindowsPhone 8 Platform. With these packages installed, you can create the Windows Phone implementation of the ISQLite interface.

There you have it. Now you have all of your native implementations complete. It's time to give this app a user interface and get your data into the database.

Step 7: Adding the User Interface

Since this tutorial is well into the topic of Xamarin.Forms, I'm going to assume that you at least have a basic working knowledge of Xamarin.Forms. With this assumption in mind, I'm not going to go into a lot of detail on the process of creating the user interface. If you need more background information on Xamarin.Forms, check out my other Xamarin.Forms tutorials on Tuts+.

The user interface is going to consist of two separate pages. The first page will contain a list of all the thoughts we have entered, while the second will let the user enter a new thought. Let's build these pages.

Create the ListView

We will first focus on creating the first page that will contain a list of RandomThought objects. Start by creating a new file in the PCL (or Shared) project and name it RandomThoughtsPage. Replace the default implementation with the following:
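A sketch matching that description (the original listing's exact layout may differ):

```csharp
using Xamarin.Forms;

public class RandomThoughtsPage : ContentPage
{
    private readonly RandomThoughtDatabase _database;
    private readonly ListView _listView;

    public RandomThoughtsPage(RandomThoughtDatabase database)
    {
        _database = database;
        Title = "Random Thoughts";

        // Show the Thought text of each saved RandomThought in a TextCell.
        _listView = new ListView
        {
            ItemsSource = _database.GetThoughts(),
            ItemTemplate = new DataTemplate(typeof(TextCell))
        };
        _listView.ItemTemplate.SetBinding(TextCell.TextProperty, "Thought");

        // Toolbar button that pushes the entry page (built in the next step).
        ToolbarItems.Add(new ToolbarItem("Add", null, async () =>
        {
            await Navigation.PushAsync(new ThoughtEntryPage(this, _database));
        }));

        Content = _listView;
    }

    // Re-query the database so newly added thoughts appear in the list.
    public void Refresh()
    {
        _listView.ItemsSource = _database.GetThoughts();
    }
}
```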

Most of the work done in this class is in the constructor. The constructor allows us to pass in an instance of the RandomThoughtDatabase to get all the saved thoughts. We set the Title property of the page to "Random Thoughts", retrieve all the existing thoughts, create a new instance of a ListView, and create a ToolbarItem that will allow us to click a button to bring up the entry page. We haven't implemented that yet, but we will shortly.

To get our new RandomThoughtsPage up on the screen, we need to make a little modification to the App.cs file. Within this file, modify the GetMainPage method to look like the following:
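Reconstructed from the description, the method could look like this; wrapping the page in a NavigationPage, so the toolbar item shows and the entry page can be pushed, is my assumption rather than something stated in the text:

```csharp
public static Page GetMainPage()
{
    var database = new RandomThoughtDatabase();
    // NavigationPage provides the navigation bar for the toolbar item
    // and lets ThoughtEntryPage be pushed onto the stack.
    return new NavigationPage(new RandomThoughtsPage(database));
}
```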

The GetMainPage method now creates a new instance of our RandomThoughtDatabase class and returns a new instance of the RandomThoughtsPage. With this change, our iOS and Android apps should look something like this:

Random Thoughts page for iOS
Random Thoughts page for Android

Create the Entry Page

We now have a list page for all of our RandomThought objects, but we don't have a way to enter new ones. For that, we will create another page similar to the previous page. Create a new file in your PCL (or Shared) project and call it ThoughtEntryPage. Replace the default implementation with the following:
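Here is one possible shape for that page; the placeholder text and button label are illustrative:

```csharp
using Xamarin.Forms;

public class ThoughtEntryPage : ContentPage
{
    public ThoughtEntryPage(RandomThoughtsPage parent, RandomThoughtDatabase database)
    {
        Title = "Add Thought";

        var entry = new Entry { Placeholder = "Enter a thought" };
        var button = new Button { Text = "Add" };

        button.Clicked += async (sender, e) =>
        {
            // Save the new thought, pop back to the list, and refresh it.
            database.AddThought(entry.Text);
            await Navigation.PopAsync();
            parent.Refresh();
        };

        Content = new StackLayout
        {
            Padding = 20,
            Children = { entry, button }
        };
    }
}
```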

In this class, all the work is done within the constructor. We get a reference to the parent page, RandomThoughtsPage, as well as the database. The rest is basic setup code with an Entry object for entering text and a Button.

Once the user taps the Button, we use the database to add the new thought, dismiss the page, go back to the list page, and call the Refresh method to update the ListView. Once this is all wired up, we can run it to actually enter some values.

Entering Thoughts

Here is what it looks like on iOS and Android to enter some of your thoughts:

Adding thoughts on iOS
Adding thoughts on Android

Viewing the List

After you have entered a few thoughts, your list will look something like this:

Listing thoughts on iOS
Listing thoughts on Android

Conclusion

There you have it. You now have the ability to add database functionality to your Xamarin.Forms app to store and retrieve any sort of data with ease.

To continue your learning journey with Xamarin.Forms and SQLite, I give you the following challenge. See if you can enhance this application to enable deleting thoughts and update the list page in a similar fashion as the entry page. Good luck and happy coding.

2015-02-11T17:30:04.000Z Derek Jensen

Rewriting History with Git Rebase


In the fundamental Git workflow, you develop a new feature in a dedicated topic branch, then merge it back into a production branch once it's finished. This makes git merge an integral tool for combining branches. However, it's not the only one that Git offers.

Combining branches by merging them together

As an alternative to the above scenario, you could combine the branches with the git rebase command. Instead of tying the branches together with a merge commit, rebasing moves the entire feature branch to the tip of master as shown below.

Combining branches with git rebase

This serves the same purpose as git merge, integrating commits from different branches. But there are two reasons why we might want to opt for a rebase over a merge:

  • It results in a linear project history.
  • It gives us the opportunity to clean up local commits.

In this tutorial, we'll explore these two common use cases of git rebase. Unfortunately, the benefits of git rebase come at a cost. When used incorrectly, it can be one of the most dangerous operations you can perform in a Git repository. So, we'll also be taking a careful look at the dangers of rebasing.

Prerequisites

This tutorial assumes that you're familiar with the basic Git commands and collaboration workflows. You should be comfortable staging and committing snapshots, developing features in isolated branches, merging branches together, and pushing/pulling branches to/from remote repositories.

1. Rebasing for a Linear History

The first use case we'll explore involves a divergent project history. Consider a repository where your production branch has moved forward while you were developing a feature:

Developing a feature on a feature branch

To rebase the feature branch onto the master branch, you would check out feature and run git rebase master.
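To see the whole thing end to end, here is a self-contained sketch you can run in a scratch directory; the branch names match the discussion, while the file names and commit messages are illustrative:

```shell
# Build a tiny repository with a diverged feature branch, then rebase it.
set -e
cd "$(mktemp -d)"
git init -q
git symbolic-ref HEAD refs/heads/master   # force the branch name "master"
git config user.email demo@example.com
git config user.name demo
echo base > base.txt && git add base.txt && git commit -qm "initial commit"
git checkout -qb feature
echo feature > feature.txt && git add feature.txt && git commit -qm "feature work"
git checkout -q master
echo upstream > upstream.txt && git add upstream.txt && git commit -qm "upstream commit"

# The rebase itself: replay feature's commits onto the tip of master.
git checkout -q feature
git rebase master
git log --oneline --graph
```

After the rebase, git log shows a straight line: the feature commit sits directly on top of the upstream commit, with no merge commit anywhere.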

This transplants the feature branch from its current location to the tip of the master branch:

Transplanting the feature branch to the tip of the master branch

There are two scenarios where you would want to do this. First, if the feature relied on the new commits in master, it would now have access to them. Second, if the feature was complete, it would now be set up for a fast-forward merge into master. In both cases, rebasing results in a linear history, whereas git merge would result in unnecessary merge commits.

For example, consider what would happen if you integrated the upstream commits by checking out feature and running git merge master instead of rebasing.

This would have given us an extra merge commit in the feature branch. What's more, this would happen every time you wanted to incorporate upstream commits into your feature. Eventually, your project history would be littered with meaningless merge commits.
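For comparison, here is the merge variant as a self-contained sketch; note the merge commit it leaves behind (file names are illustrative):

```shell
# Same diverged setup as before, but integrate upstream with a merge.
set -e
cd "$(mktemp -d)"
git init -q
git symbolic-ref HEAD refs/heads/master
git config user.email demo@example.com
git config user.name demo
echo base > base.txt && git add base.txt && git commit -qm "initial commit"
git checkout -qb feature
echo feature > feature.txt && git add feature.txt && git commit -qm "feature work"
git checkout -q master
echo upstream > upstream.txt && git add upstream.txt && git commit -qm "upstream commit"

# Pull the upstream commit into feature with a merge instead of a rebase:
git checkout -q feature
git merge --no-edit master
git log --oneline --graph
```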

Integrating upstream commits with a merge

This same benefit can be seen when merging in the other direction. Without a rebase, integrating the finished feature branch into master requires a merge commit. While this is actually a meaningful merge commit (in the sense that it represents a completed feature), the resulting history is full of forks:

Integrating a completed feature with a merge

When you rebase before merging, Git is able to fast-forward master to the tip of feature. You'll find a linear story of how your project has progressed in the git log output—the commits in feature are neatly grouped together on top of the commits in master. This is not necessarily the case when branches are tied together with a merge commit.

Rebasing before merging

Resolving Conflicts

When you run git rebase, Git takes each commit in the branch and moves it, one by one, onto the new base. If any of these commits alter the same line(s) of code as the upstream commits, the rebase will result in a conflict.

The git merge command lets you resolve all of the branch's conflicts at the end of the merge, which is one of the primary purposes of a merge commit. However, it works a little bit differently when you're rebasing. Conflicts are resolved on a per-commit basis. So, if git rebase finds a conflict, it will stop the rebase procedure and display a warning message identifying the commit that could not be applied.

Visually, your project history is now in a halfway state: the commits that were already replayed sit on top of the new base, while the conflicted commit and everything after it still wait to be moved.

The conflicts can be inspected by running git status. The output looks very similar to a merge conflict, with the conflicted file listed under the unmerged paths.

To resolve the conflict, open up the conflicted file (say, readme.txt), find the affected lines, and manually edit them to the desired result. Then, tell Git that the conflict is resolved by staging the file with git add.

Note that this is the exact same way you mark a git merge conflict as resolved. But remember that you're in the middle of a rebase—you don't want to forget about the rest of the commits that need to be moved. The last step is to tell Git to finish rebasing by running git rebase --continue.

This will move the rest of the commits, one-by-one, and if any other conflicts arise, you'll have to repeat this process all over again.
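Putting the whole conflict round trip together, here is a self-contained sketch in a scratch repository; the file contents and commit messages are illustrative:

```shell
# Both branches edit the same line of readme.txt, so the rebase stops.
set -e
cd "$(mktemp -d)"
git init -q
git symbolic-ref HEAD refs/heads/master
git config user.email demo@example.com
git config user.name demo
echo "first draft" > readme.txt && git add readme.txt && git commit -qm "add readme"
git checkout -qb feature
echo "feature wording" > readme.txt && git commit -qam "edit readme on feature"
git checkout -q master
echo "master wording" > readme.txt && git commit -qam "edit readme on master"

git checkout -q feature
git rebase master || true            # stops and reports the conflict
git status                           # readme.txt appears under "Unmerged paths"

echo "merged wording" > readme.txt   # resolve the conflict by hand
git add readme.txt                   # mark it as resolved
GIT_EDITOR=true git rebase --continue   # replay any remaining commits
```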

If you don't want to resolve the conflict, you can opt for either the --skip or --abort flags. The latter is particularly useful if you have no idea what's going on and just want to get back to safety.

2. Rebasing to Clean Up Local Commits

So far, we've only been using git rebase to move branches, but it's much more powerful than that. By passing the -i flag, you can begin an interactive rebasing session. Interactive rebasing lets you define precisely how each commit will be moved to the new base. This gives you the opportunity to clean up a feature's history before sharing it with other developers.

For example, let's say you finished working on your feature branch and you're ready to integrate it into master. To begin an interactive rebasing session, check out the feature branch and run git rebase -i master.

This will open an editor containing all the commits in feature that are about to be moved.
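The listing takes this general shape (the hashes and messages here are illustrative):

```
pick 5c8d4c3 Add the first part of the feature
pick b729ad5 Fix a typo
pick e2896d3 Add the second part of the feature
```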

This listing defines what the feature branch is going to look like after the rebase. Each line represents a commit and the pick command before each commit hash defines what's going to happen to it during the rebase. Note that the commits are listed from oldest to most recent. By altering this listing, you gain complete control over your project history.

If you want to change the order of the commits, simply reorder the lines. If you want to change a commit's message, use the reword command. If you want to combine two commits, change the pick command to squash. This will roll all of the changes in that commit into the one above it. For example, if you squashed the second commit in the above listing, the feature branch would look like the following after saving and closing the editor:

Squashing the 2nd commit with an interactive rebase

The edit command is particularly powerful. When it reaches the specified commit, Git will pause the rebase procedure, much like when it encounters a conflict. This gives you the opportunity to alter the contents of the commit with git commit --amend or even add more commits with the standard git add/git commit commands. Any new commits you add will be part of the new branch.

Interactive rebasing can have a profound impact on your development workflow. Instead of worrying about breaking up your changes into encapsulated commits, you can focus on writing your code. If you ended up committing what should be a single change into four separate snapshots, then that isn't a problem—rewrite history with git rebase -i and squash them all into one meaningful commit.
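As a self-contained sketch, squashing can even be scripted by driving the todo list through the GIT_SEQUENCE_EDITOR environment variable (GNU sed assumed; branch and file names are illustrative):

```shell
# Squash three work-in-progress commits on feature into a single commit.
set -e
cd "$(mktemp -d)"
git init -q
git symbolic-ref HEAD refs/heads/master
git config user.email demo@example.com
git config user.name demo
echo base > base.txt && git add base.txt && git commit -qm "initial commit"
git checkout -qb feature
for i in 1 2 3; do
  echo "$i" > "part$i.txt" && git add "part$i.txt" && git commit -qm "wip $i"
done

# Turn every 'pick' after the first into 'squash' in the todo list:
GIT_SEQUENCE_EDITOR="sed -i '2,\$s/^pick/squash/'" \
  GIT_EDITOR=true git rebase -i master
git log --oneline
```

Interactively you would simply edit the pick lines by hand; the environment variables here just automate the two editor steps so the example runs unattended.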

3. Dangers of Rebasing

Now that you have an understanding of git rebase, we can talk about when not to use it. Internally, rebasing doesn't actually move commits to a new branch. Instead, it creates brand new commits that contain the desired changes. With this in mind, rebasing is better visualized as copying each commit onto the new base rather than relocating it.

After the rebase, the commits in feature will have different commit hashes. This means that we didn't just reposition a branch—we literally rewrote our project history. This is a very important side effect of git rebase.

When you're working alone on a project, rewriting history isn't a big deal. However, as soon as you start working in a collaborative environment, it can become very dangerous. If you rewrite commits that other developers are using (e.g., commits on the master branch), it will look as if those commits vanished the next time they try to pull in your work. This results in a confusing scenario that's difficult to recover from.

With this in mind, you should never rebase commits that have been pushed to a public repository unless you're positive that nobody has based their work on them.

Conclusion

This tutorial introduced the two most common use cases of git rebase. We talked a lot about moving branches around, but keep in mind that rebasing is really about controlling your project history. The power to rewrite commits after the fact frees you to focus on your development tasks instead of breaking down your work into isolated snapshots.

Note that rebasing is an entirely optional addition to your Git toolbox. You can still do everything you need to with plain old git merge commands. Indeed, this is safer as it avoids the possibility of rewriting public history. However, if you understand the risks, git rebase can be a much cleaner way to integrate branches compared to merging commits.

Published 2015-02-13 by Ryan Hodson


An Introduction to Android TV


Do you want to get a better understanding of Android TV? Maybe you want to extend your existing Android projects to support this new platform, or maybe you have an idea for an Android TV app you want to develop.

Whatever your motivation, this article will introduce you to the Android TV platform, from what Android TV is and the characteristics of an effective TV app, right through to creating and testing your very own Android TV sample project.

1. What Is Android TV?

Announced at Google I/O 2014, Android TV is the new smart TV platform from Google. Users can either purchase a TV with the new platform built in, or they can add Android TV to their existing television by purchasing a standalone set-top box, such as the Nexus Player.

Essentially, Android TV brings the apps and functionality users already enjoy on smaller Android devices to the big screen. Users can download Android TV apps from the familiar Google Play store, and the platform supports Google Cast, so users can cast content from their smartphone or tablet onto their Android TV device.

2. Designing for Android TV

If you have experience developing for Android smartphones or tablets, Android TV will feel immediately familiar, but there are some crucial differences you need to be aware of. This section covers the best practices that are unique to Android TV.

Deliver an Effective '10 Foot Experience'

According to the official Android TV documentation, the average TV viewer sits around 10 feet away from their screen, so all your onscreen content must be clearly visible from 10 feet away.

One trick for delivering an effective '10 foot experience' is to design a user interface that resizes automatically, depending on the size of the TV screen. This means using layout-relative sizing, such as fill_parent, rather than absolute sizing, and opting for density-independent pixel units rather than absolute pixel units.
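As a hedged illustration of layout-relative sizing, a view in your layout XML might be declared like this (the article's own terms, fill_parent and density-independent units, are used; the specific view and values are placeholders):

```xml
<!-- Width tracks the parent container; text and padding use dp/sp,
     which scale with screen density instead of absolute pixels. -->
<TextView
    android:layout_width="fill_parent"
    android:layout_height="wrap_content"
    android:textSize="24sp"
    android:padding="16dp" />
```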

You should also keep text to a minimum as text becomes more difficult to read at a distance. As much as possible, you should communicate with your users via other methods, such as voiceover, sound effects, video, and images.

If you do need to include text, make it easier to read by:

  • avoiding lightweight fonts
  • avoiding fonts that have very narrow or very broad strokes
  • using light text on dark backgrounds
  • breaking text into small chunks

Minimize and Simplify Interaction

Think about how you interact with your TV. You usually perform a few simple interactions to get to the content you want, whether that's changing the channel, booting up the DVD player, or launching your favorite content-streaming app.

You don’t expect to have to perform complicated interactions—and neither do Android TV users. If you want to hold the user's attention, your app needs to have the fewest possible screens between the app entry point and content immersion.

Even once the user is immersed in your app, keep interactions simple and to a minimum, as your typical TV user has limited controls at their disposal: usually a remote control, a game controller, or the official Android TV app installed on their smartphone or tablet.

Easy Navigation

TV controls tend to be restricted to a directional pad and a select button, so your challenge is to create an effective navigation scheme for your app, using these limited controls.

One trick is to use a view group (such as ListView or GridView) that automatically arranges your app's user interface elements into lists or grids, which are easy to navigate with a directional pad and a select button.

Your users should also be able to tell at a glance which object is currently selected. You can highlight the currently selected object using visual cues, such as size, shadow, brightness, animation, and opacity.

Simple and Uncluttered

Android TV may give you more screen real estate to play with, but don't get carried away and try to fill every inch of space. A simple, uncluttered user interface isn't only more visually appealing, it's also easier to navigate—something that's particularly important considering the limited controls available to your typical Android TV user.

A user interface containing a few big, bold user interface elements is also going to provide a better '10 foot experience' than a screen filled with lots of smaller user interface elements.

Support Landscape Mode

All of your project's activities must support landscape orientation or your app won't appear to Android TV users in the Google Play store.

If you're developing an app that can also run on smartphones and tablets, be aware that if your project contains android:screenOrientation="portrait" the android.hardware.screen.portrait requirement is implicitly set to true. You need to specify that although your app supports portrait orientation where available, it can run on devices where portrait mode isn't supported (i.e. Android TV):
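The declaration is made with a standard uses-feature element in the manifest:

```xml
<!-- Portrait support is optional, so devices without it (Android TV) qualify. -->
<uses-feature
    android:name="android.hardware.screen.portrait"
    android:required="false" />
```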

Allow for Overscan

To ensure there's never any blank space around the edges of your screen, TVs can clip the edges of content in a process known as overscan. Since you don't want to lose any important content to overscan, you should leave a margin around the edges of your app that's free from any user interface elements.

The v17 Leanback library automatically applies overscan-safe margins to your app. Alternatively, you can create your own overscan-safe margins by leaving 10% of blank space around the edges of your app. This translates to a 48dp margin around the left and right edges (android:layout_marginRight="48dp"), and 27dp along the top and bottom (android:layout_marginBottom="27dp").
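A minimal sketch of a root layout with those overscan-safe margins applied on all four edges (the RelativeLayout is a placeholder; any root view group works):

```xml
<!-- 48dp horizontal and 27dp vertical margins keep content clear of overscan. -->
<RelativeLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:layout_marginLeft="48dp"
    android:layout_marginRight="48dp"
    android:layout_marginTop="27dp"
    android:layout_marginBottom="27dp" />
```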

Design for Android TV's Hardware Limitations

Android TVs don't have many of the hardware features typically available to other Android-powered devices. When you're developing for the Android TV platform, you can't use the following:

  • Near Field Communication (NFC)
  • GPS
  • Camera
  • Microphone
  • Touchscreen
  • Telephony

If you want your app to run on non-TV devices, such as smartphones and tablets, you can specify that although your app doesn't require these hardware features, it will use them where available, for example:
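A sketch of such declarations, using the standard Android feature constants:

```xml
<!-- These features are used when present, but are not required to install. -->
<uses-feature android:name="android.hardware.camera" android:required="false" />
<uses-feature android:name="android.hardware.nfc" android:required="false" />
<uses-feature android:name="android.hardware.location.gps" android:required="false" />
<uses-feature android:name="android.hardware.touchscreen" android:required="false" />
```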

Also be aware that the following uses-permission manifest declarations imply hardware features that Android TV doesn't support:

  • RECORD_AUDIO
  • CAMERA
  • ACCESS_COARSE_LOCATION
  • ACCESS_FINE_LOCATION

3. Creating an Android TV Sample Project

In the final part of this tutorial, we'll take a first-hand look at some TV-ready code by creating and testing a basic Android TV project.

Before you can develop anything for the Android TV platform, make sure you've updated your SDK to Android 5.0 (API 21) or higher, and your SDK tools to version 24.0.0 or higher.

Once you're up to date, it's time to create your app:

  1. Launch Android Studio.
  2. Select Start a new Android Studio Project.
  3. Give your project a name and a domain. Click Next.
  4. Select TV, and then deselect all the other checkboxes. Although you can create Android TV projects that have a smartphone, tablet and/or Android Wear module, for the sake of simplicity we'll be creating a single-module project. Click Next.
Ensure only the TV checkbox is selected

  5. Select Android TV Activity and click Next.

  6. Stick to the default settings and click Finish.

Accept all the default activity layout and fragment names and click Finish

Android Studio will then create your project.

Your Android TV project's layout should feel instantly familiar

4. Breaking Down the Manifest

Now that you've created your sample project, we'll take a line-by-line look at the Android Manifest, as this file contains lots of TV-specific code.

Note that although the majority of this code is generated automatically when you create an Android TV project, I've made some minor additions and adjustments that are all clearly marked in the text.

As already mentioned, the android.permission.RECORD_AUDIO permission implies that your app requires android.hardware.microphone. So this line of code effectively prevents your app from being installed on devices that don't have access to a microphone. If your app can function without a microphone, you should make the following addition:
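That addition is a uses-feature override marking the microphone as optional:

```xml
<!-- RECORD_AUDIO implies the microphone feature; declare it optional. -->
<uses-feature
    android:name="android.hardware.microphone"
    android:required="false" />
```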

Adding the above code means that users can install your app on devices that don't have access to microphone hardware.

While touchscreen support is found on many Android-powered devices, this isn't the case with Android TV. Your Android TV app must declare that it doesn't require touchscreen support.
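This is done with another uses-feature element:

```xml
<!-- Android TV devices have no touchscreen, so it must not be required. -->
<uses-feature
    android:name="android.hardware.touchscreen"
    android:required="false" />
```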

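In the generated manifest, the Leanback declaration takes the following form (the standard uses-feature element for the Leanback UI):

```xml
<uses-feature
    android:name="android.software.leanback"
    android:required="true" />
```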
The above snippet declares that your app uses the Leanback interface we discussed earlier.

If you want your app to run on non-TV devices where Leanback isn't supported, you'll need to change the above line to android:required="false".
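The theme itself is applied with an attribute on the manifest's <application> element, assuming the v17 Leanback library's bundled style:

```xml
android:theme="@style/Theme.Leanback"
```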

The above addition applies the Leanback theme to your project.

App banners represent your app on the Android TV homescreen and are how the user launches your app. Your app banner must be a 320px x 180px xhdpi image and should include text.

If you want to use the same banner across all your activities, you should add the android:banner attribute to your manifest's <application> tag as I've done here. If you want to provide a different banner for each activity, you'll need to add an android:banner attribute to all of your application's <activity> tags instead.
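The launcher declaration uses the LEANBACK_LAUNCHER intent category; the activity name below is the template's default and may differ in your project:

```xml
<activity android:name=".MainActivity">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LEANBACK_LAUNCHER" />
    </intent-filter>
</activity>
```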

This snippet declares a launcher activity for TV.

5. Testing the Sample App

The next step is testing how your app functions from the user perspective. Even if you have access to a physical Android TV, you'll want to test your project across multiple devices, so you should always create at least one Android TV AVD.

To create your AVD:

  1. Launch the AVD Manager, either by clicking the AVD Manager button in the toolbar or by selecting Tools > Android > AVD Manager.
  2. Click Create New Virtual Device.
  3. Select the TV category.
  4. Choose one of the Android TV devices listed and click Next.
  5. Select your system image and click Next.
  6. Give your AVD a name and click Finish.

To test your sample project, select Run > Run app, followed by your TV AVD. Once the AVD has finished loading, you'll see the Android TV user interface with your app's banner in the bottom-left corner.

You'll see your app's banner in the bottom-left corner of the Android TV user interface

To launch your app, click the banner image. After a short delay, your app will appear in the AVD window.

Your app is split into categories along the left hand side and content along the right

6. More to Explore

This article has given you an introduction to Android TV and has shown you how to create a sample app. If you want to explore more of the Android TV platform, you can continue developing your sample app by taking a look at the following areas:

  • BrowseFragment. In the sample app, each row of content corresponds to a category. This portion of the user interface is created with the BrowseFragment class. You can learn more about this fragment in the Creating a Catalog Browser section of the official Android TV docs.
  • DetailsFragment. Click any piece of content to see more information about that content. To expand this functionality, take a look at the DetailsFragment class in Building a Details View.

Click any piece of content to view more information about that content

  • SearchFragment. Despite the search icon in the app’s upper-left corner, the search function isn’t working in your sample app. To get it working you'll need to take a look at the SearchFragment class, which you can also find more information about in the Android TV documentation.

Conclusion

You should now have a better understanding of what developing for Android TV involves. There are a few caveats to watch out for, but developing for Android TV shouldn't be too difficult if you already have experience developing for Android.

Published 2015-02-16 by Jessica Thornsby

An Introduction to Android TV

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-23251

Do you want to get a better understanding of Android TV? Maybe you want to extend your existing Android projects to support this new platform, or maybe you have an idea for an Android TV app you want to develop.

Whatever your motivation, this article will introduce you to the Android TV platform, from what Android TV is and the characteristics of an effective TV app, right through to creating and testing your very own Android TV sample project.

1. What Is Android TV?

Announced at Google IO 2014, Android TV is the new smart TV platform from Google. Users can either purchase a TV with the new platform built in, or they can add Android TV to their existing television by purchasing a standalone set-top box, such as the Nexus Player.

Essentially, Android TV brings the apps and functionality users already enjoy on smaller Android devices to the big screen. Users can download Android TV apps from the familiar Google Play store, and the platform supports Google Cast, so users can cast content from their smartphone or tablet onto their Android TV device.

2. Designing for Android TV

If you have experience developing for Android smartphones or tablets, Android TV will feel immediately familiar, but there are some crucial differences you need to be aware of. This section covers the best practices that are unique to Android TV.

Deliver an Effective '10 Foot Experience'

According to the official Android TV documentation, the average TV viewer sits around 10 feet away from their screen, so all your onscreen content must be clearly visible from 10 feet away.

One trick for delivering an effective '10 foot experience' is to design a user interface that resizes automatically, depending on the size of the TV screen. This means using layout-relative sizing, such as fill_parent, rather than absolute sizing, and opting for density-independent pixel units rather than absolute pixel units.

You should also keep text to a minimum as text becomes more difficult to read at a distance. As much as possible, you should communicate with your users via other methods, such as voiceover, sound effects, video, and images.

If you do need to include text, make it easier to read by:

  • avoiding lightweight fonts
  • avoiding fonts that have very narrow or very broad strokes
  • using light text on dark backgrounds
  • breaking text into small chunks

Minimize and Simplify Interaction

Think about how you interact with your TV. You usually perform a few simple interactions to get to the content you want, whether that's changing the channel, booting up the DVD player, or launching your favorite content-streaming app.

You don’t expect to have to perform complicated interactions—and neither do Android TV users. If you want to hold the user's attention, your app needs to have the fewest possible screens between the app entry point and content immersion.

Even once the user is immersed in your app, you should keep interactions to a minimum and avoid any complicated interactions as your typical TV user has limited controls at their disposal —usually either a remote control, a game controller, or the official Android TV app installed on their smartphone or tablet.

Easy Navigation

TV controls tend to be restricted to a directional pad and a select button, so your challenge is to create an effective navigation scheme for your app, using these limited controls.

One trick is to use a view group (such as List View or Grid View) that automatically arranges your app's user interface elements into lists or grids, which are easy to navigate with a directional pad and a select button.

Your users should also to be able to tell at a glance which object is currently selected. You can highlight the currently selected object using visual cues, such as size, shadow, brightness, animation, and opacity.

Simple and Uncluttered

Android TV may give you more screen real estate to play with, but don't get carried away and try to fill every inch of space. A simple, uncluttered user interface isn't only more visually appealing, it's also easier to navigate—something that's particularly important considering the limited controls available to your typical Android TV user.

A user interface containing a few big, bold user interface elements is also going to provide a better '10 foot experience' than a screen filled with lots of smaller user interface elements.

Support Landscape Mode

All of your project's activities must support landscape orientation or your app won't appear to Android TV users in the Google Play store.

If you're developing an app that can also run on smartphones and tablets, be aware that if your project contains android:screenOrientation="portrait" the android.hardware.screen.portrait requirement is implicitly set to true. You need to specify that although your app supports portrait orientation where available, it can run on devices where portrait mode isn't supported (i.e. Android TV):

Allow for Overscan

To ensure there's never any blank space around the edges of your screen, TVs can clip the edges of content in a process known as overscan. Since you don't want to lose any important content to overscan, you should leave a margin around the edges of your app that's free from any user interface elements.

The v17 Leanback library automatically applies overscan-safe margins to your app. Alternatively, you can create your own overscan-safe margins by leaving 10% of blank space around the edges of your app. This translates to a 48dp margin around the left and right edges (android:layout_marginRight="48dp"), and 27dp along the top and bottom (android:layout_marginBottom="27dp").

Design for Android TV's Hardware Limitations

Android TVs don't have many of the hardware features typically available to other Android-powered devices. When you're developing for the Android TV platform, you can't use the following:

  • Near Field Communication (NFC)
  • GPS
  • Camera
  • Microphone
  • Touchscreen
  • Telephony

If you want your app to run on non-TV devices, such as smartphones and tablets, you can specify that although your app doesn't require these hardware features, it will use them where available, for example:

Also be aware that the following uses-permission manifest declarations imply hardware features that Android TV doesn't support:

  • RECORD_AUDIO
  • CAMERA
  • ACCESS_COARSE_LOCATION
  • ACCESS_FINE_LOCATION

3. Creating an Android TV Sample Project

In the final part of this tutorial, we'll take a first-hand look at some TV-ready code by creating and testing a basic Android TV project.

Before you can develop anything for the Android TV platform, make sure you've updated your SDK to Android 5.0 (API 21) or higher, and your SDK tools to version 24.0.0 or higher.

Once you're up to date, it's time to create your app:

  1. Launch Android Studio.
  2. Select Start a new Android Studio Project.
  3. Give your project a name and a domain. Click Next.
  4. Select TV, and then deselect all the other checkboxes. Although you can create Android TV projects that have a smartphone, tablet and/or Android Wear module, for the sake of simplicity we'll be creating a single-module project. Click Next.
Ensure only the TV checkbox is selected

5. Select Android TV Activity and click Next.

6. Stick to the default settings and click Finish.

Accept all the default activity layout and fragment names and click Finish

Android Studio will then create your project.

Your Android TV projects layout should feel instantly familiar

4. Breaking Down the Manifest

Now you've created your sample project, we'll take a line-by-line look at the Android Manifest, as this file contains lots of TV-specific code.

Note that although the majority of this code is generated automatically when you create an Android TV project, I've made some minor additions and adjustments that are all clearly marked in the text.

As already mentioned, the android.permission.RECORD_AUDIO permission implies that your app requires android.hardware.microphone. So this line of code effectively prevents your app from being installed on devices that don't have access to a microphone. If your app can function without a microphone, you should make the following addition:

Adding the above code means that users can install your app on devices that don't have access to microphone hardware.

While touchscreen support is found on many Android-powered devices, this isn't the case with Android TV. Your Android TV app must declare that it doesn't require touchscreen support.

The above snippet declares that your app uses the Leanback interface we discussed earlier.

If you want your app to run on non-TV devices where Leanback isn't supported, you'll need to change the above line to android:required="false".

The above addition applies the Leanback theme to your project.

App banners represent your app on the Android TV homescreen and are how the user launches your app. Your app banner must be a 320px x 180px xhdpi image and should include text.

If you want to use the same banner across all your activities, you should add the android:banner attribute to your manifest's <application> tag as I've done here. If you want to provide a different banner for each activity, you'll need to add an android:banner attribute to all of your application's <activity> tags instead.

This snippet declares a launcher activity for TV.

5. Testing the Sample App

The next step is testing how your app functions from the user perspective. Even if you have access to a physical Android TV, you'll want to test your project across multiple devices, so you should always create at least one Android TV AVD.

To create your AVD:

  1. Launch the AVD Manager, either by clicking the AVD Manager button in the toolbar or by selecting Tools > Android > AVD Manager.
  2. Click Create New Virtual Device.
  3. Select the TV category.
  4. Choose one of the Android TV devices listed and click Next.
  5. Select your system image and click Next.
  6. Give your AVD a name and click Finish.

To test your sample project, select Run > Run 'app', followed by your TV AVD. Once the AVD has finished loading, you'll see the Android TV user interface with your app's banner in the bottom-left corner.

You'll see your app's banner in the bottom-left corner of the Android TV user interface

To launch your app, click the banner image. After a short delay, your app will appear in the AVD window.

Your app is split into categories along the left-hand side and content along the right

6. More to Explore

This article has given you an introduction to Android TV and has shown you how to create a sample app. If you want to explore more of the Android TV platform, you can continue developing your sample app by taking a look at the following areas:

  • BrowseFragment. In the sample app, each row of content corresponds to a category. This portion of the user interface is created with the BrowseFragment class. You can learn more about this fragment in the Creating a Catalog Browser section of the official Android TV docs.
  • DetailsFragment. Click any piece of content to see more information about that content. To expand this functionality, take a look at the DetailsFragment class in Building a Details View.

Click any piece of content to view more information about that content

  • SearchFragment. Despite the search icon in the app's upper-left corner, the search function isn't working in your sample app. To get it working, you'll need to take a look at the SearchFragment class, which you can also find more information about in the Android TV documentation.

Conclusion

You should now have a better understanding of what developing for Android TV involves. There are a few caveats to watch out for, but developing for Android TV shouldn't be too difficult if you already have experience developing for Android.

Published 2015-02-16 by Jessica Thornsby