Channel: Envato Tuts+ Code - Mobile Development

What's New in iOS 10? Find Out in This Short Course

Want to start the New Year by getting up to date with all the latest changes in iOS 10? Our new short course, What's New in iOS 10, has you covered. In just 38 minutes, you'll get fully up to speed with all the latest features, such as haptic feedback, sticker packs, and working with the new features of SiriKit.

Example of ride sharing app in iOS 10

What You’ll Learn

With every new version of iOS, Apple introduces a bunch of new features and enhancements to the developer experience. These are especially exciting to the mobile development community, because they create whole new possibilities for the kinds of apps we can build for our users.

Generating haptic feedback in iOS 10

In this course, Envato Tuts+ instructor Markus Mühlberger will show you some of the coolest new features for developers in iOS 10. You'll learn how to make a sticker pack, create a custom extension, provide haptic feedback, and even have a conversation with your users with SiriKit!

Watch the Introduction

 

Take the Course

You can take our new course straight away with a free 10-day trial of our monthly subscription. If you decide to continue, it costs just $15 a month, and you’ll get access to hundreds of courses, with new ones added every week.

Not up to speed with Swift yet? Watch our comprehensive course Up and Running With Swift 2 or check out these other Swift courses:

You can also find a ton of useful iOS developer resources on Envato Market.

Published 2017-01-05 by Andrew Blackman

Create SiriKit Extensions in iOS 10

Final product image
What You'll Be Creating

Introduction

Since Siri was introduced back in 2011, iOS developers have been asking for a way to integrate their third-party apps with it. With iOS 10, announced at WWDC 2016, Apple finally made SiriKit available to developers. There are still some restrictions on which types of applications can take advantage of Siri, but it's a step in the right direction. Let's take a look at what we can do with Siri.

For more on SiriKit and the other new features for developers in iOS 10, check out Markus Mühlberger's course, right here on Envato Tuts+.

Supported Domains

To make use of SiriKit, your app has to be in one or more of the following domains:

  • VoIP calling (for example, Skype)
  • Messaging (WhatsApp)
  • Payments (Square, PayPal)
  • Photo (Photos)
  • Workouts (Runtastic)
  • Ride booking (Uber, Lyft)
  • CarPlay (automotive vendors only)
  • Restaurant reservations (requires additional support from Apple)

If your app doesn't belong to any of these categories, unfortunately you cannot use Siri in your app at this moment. Don't leave just yet, though, because SiriKit is very powerful, and it may gain new capabilities in the future!

Extension Architecture

A SiriKit integration actually consists of two types of extension. An Intents extension is required; it handles the user's requests and executes a specific task in your app (such as starting a call or sending a message).

On the other hand, an IntentsUI extension is not mandatory. You should only create one if you want to customize the user interface that Siri shows when presenting your data. If you don't do this, the standard Siri interface will be displayed. We are going to take a look at both types of extension in this tutorial.

Note that during WWDC 2016 Apple also presented two very interesting session videos about SiriKit, which are worth checking out.

Example Project

We are going to build a simple app that processes payments via Siri. The goal is to successfully process the sentence "Send $20 to Patrick via TutsplusPayments". The format of the sentence consists of an amount of money with a specific currency, the name of the payee, and the app to use to complete the transaction. We are later going to analyze the payment intent in more detail.

Initial Setup

Let's start by creating a standard Xcode project in Swift and giving it a name. There are a few mandatory steps that you have to do before writing any code to enable your app to use Siri's APIs.

1. Select your Target > Capabilities and enable the Siri capability. Make sure that the entitlements were created successfully in your project structure.

Xcode project capabilities view

2. Open your app's Info.plist and add the key NSSiriUsageDescription. The value must be a string explaining how your app uses Siri; it will be shown to the user when they are asked to grant the initial permission.

3. Select File > New > Target. In the new window presented by Xcode, under Application Extensions, choose Intents Extension. Also select the option to include a UI Extension. This will save you from later having to create another separate extension.

New Intents extension target

In the Info.plist file of your newly created Intents target, fully expand the NSExtension dictionary to study its contents. The dictionary describes in more detail which intents your extension supports and whether you want to allow the user to invoke an intent while the device is locked.

Insert the most relevant intents at the top if you want to support more than one. Siri uses this order to figure out which one the user wants to use in case of ambiguity.

We now need to define which intents we want to support. In this example, we are going to build an extension that supports the payment intent. Modify the Info.plist file to match the following picture.

Intents Info plist file

Here we specify that we want to handle the INSendPaymentIntent and that we require the device to be unlocked. We don't want strangers to send payments when the device is lost or stolen!
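
In source form, the relevant part of the extension's Info.plist ends up looking roughly like this (the NSExtensionPrincipalClass value is whatever the Xcode template generated for your target):

    <key>NSExtension</key>
    <dict>
        <key>NSExtensionAttributes</key>
        <dict>
            <key>IntentsRestrictedWhileLocked</key>
            <array>
                <string>INSendPaymentIntent</string>
            </array>
            <key>IntentsSupported</key>
            <array>
                <string>INSendPaymentIntent</string>
            </array>
        </dict>
        <key>NSExtensionPointIdentifier</key>
        <string>com.apple.intents-service</string>
        <key>NSExtensionPrincipalClass</key>
        <string>$(PRODUCT_MODULE_NAME).IntentHandler</string>
    </dict>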

iOS Target

The next step actually involves writing some code in the iOS app. We have to ask the user for permission to send their voice to Apple for analysis. We simply have to import the Intents framework and call the appropriate method as follows:
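
A minimal sketch, assuming you trigger the request from the default ViewController that the Xcode app template created:

    import Intents
    import UIKit

    class ViewController: UIViewController {

        override func viewDidLoad() {
            super.viewDidLoad()

            // Ask the user for permission to use Siri with this app.
            // The text from NSSiriUsageDescription is shown in the permission alert.
            INPreferences.requestSiriAuthorization { status in
                print("Siri authorization status: \(status.rawValue)")
            }
        }
    }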

The resulting dialog presented to the user during the first launch of the app will look like this.

Alert that asks permission to access Siri

This is all we have to do in our simple iOS app. Let's get into the extensions world now!

Intents Extension

Switch to the Intents extension that we created earlier. Expand its contents in the Xcode project navigator. You'll see just one file named IntentHandler.swift.

This file is the entry point of your extension and is used to handle any intents that Siri sends you. Siri forwards every intent to the handler(for:) method, even if your extension supports multiple intent types, so it's your job to check the type of the INIntent object and handle it appropriately.

The IntentHandler.swift template already contains an example implementation of a Messaging intent. Replace all the code with the following empty method so that we can walk together through each step.
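
The stripped-down starting point looks something like this (mirroring the structure of the Xcode template):

    import Intents

    class IntentHandler: INExtension {

        override func handler(for intent: INIntent) -> Any {
            // Return the object that resolves, confirms, and handles the intent.
            // This class adopts INSendPaymentIntentHandling below, so we return self.
            return self
        }
    }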

Each intent has an associated protocol to make sure a class implements all the required methods. Most of the protocols in the Intents framework have the same structure.

The protocol that we are going to implement is called INSendPaymentIntentHandling. This protocol contains the following required and optional methods:

  • Required:
    handle(sendPayment:completion:)
  • Optional:
    confirm(sendPayment:completion:)
    resolvePayee(forSendPayment:with:)
    resolveCurrencyAmount(forSendPayment:with:)
    resolveNote(forSendPayment:with:)

Let's create an extension of the IntentHandler class in the same Swift file to implement the only required method.
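
A minimal sketch along those lines might look like the following (the choice of failure code is an assumption, and a real app would also perform the actual transaction here):

    extension IntentHandler: INSendPaymentIntentHandling {

        func handle(sendPayment intent: INSendPaymentIntent,
                    completion: @escaping (INSendPaymentIntentResponse) -> Void) {
            // Only report success when Siri extracted both a payee and an amount.
            if intent.payee != nil && intent.currencyAmount != nil {
                completion(INSendPaymentIntentResponse(code: .success, userActivity: nil))
            } else {
                completion(INSendPaymentIntentResponse(code: .failure, userActivity: nil))
            }
        }
    }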

This is a very basic implementation. We make sure there is a valid payee and currencyAmount before reporting the transaction as successful. You may not believe it, but it works already! Select the Intents scheme in Xcode and run it. When Xcode presents the usual menu to choose an app to run, select Siri.

Run the extension in Xcodes menu

When Siri starts, try to say, "Send $20 to Patrick via TutsplusPayments". Now enjoy your first successful payment completed with your voice!

First successful payment via Siri

You can also test the failing case. Say the same sentence as before, but without specifying the payee (for example, "Send $20 via TutsplusPayments"). You'll see that Siri fails and presents the user with a button to continue the payment in your app.

Payment failed due to missing payee name

If Siri does not understand one of the parameters, or the user does not provide one that you require, you can implement the corresponding resolve method. These methods let Siri prompt the user for more details about the payment, such as the name of the payee, the exact currency amount, or even a note. Thanks to this architecture, you as a developer can clarify each part of the user's request separately and unambiguously.
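
For example, a sketch of resolvePayee(forSendPayment:with:) that asks Siri to prompt for a missing payee could look like this (a real app would also match the spoken name against its own contact or account list):

    // Add this to the INSendPaymentIntentHandling extension shown earlier.
    func resolvePayee(forSendPayment intent: INSendPaymentIntent,
                      with completion: @escaping (INPersonResolutionResult) -> Void) {
        if let payee = intent.payee {
            // Accept the payee that Siri recognized.
            completion(.success(with: payee))
        } else {
            // Ask Siri to prompt the user for the missing payee.
            completion(.needsValue())
        }
    }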

In a real-world application, you would create a dynamic framework that is shared between your iOS app and its extensions. With this architecture, you can share the same business logic across multiple targets: you implement it once and use it everywhere.

Intents UI Extension

In the last part of this tutorial, I am going to show you how you can customize the user interface that Siri displays.

First of all, remember to set the correct intent class that you want to handle in the Info.plist of the Intents UI extension, just as we did for the Intents extension in the previous section.

Jump into the Intents UI extension and you'll see the template that Xcode has created for you. It contains an IntentViewController, which is a simple subclass of UIViewController that implements the INUIHostedViewControlling protocol. A Storyboard file was also created for you; open it so that we can start customizing the user interface.

Add a UIImageView as the background and a label in the center. Download the background image, import it into the Intents UI target, and set it as the image of the newly created UIImageView. Create a UILabel object and position it at the center of the view. You can easily use AutoLayout to set up the constraints in Storyboard.

Open the Assistant Editor, create an @IBOutlet for your label, and call it contentLabel. The result should look something like this:

Storyboard showing the new IntentsUI view

Open the IntentViewController file and you'll see a bunch of example code. You can remove everything except the configure(with:context:completion:) method that we are going to implement now. This method is called when the user interface is ready to be configured. What we have to do here is set the content of the UILabel.

First of all, we check that the intent object is of type INSendPaymentIntent. If it is, we also make sure that all the properties we want to display are not nil; otherwise, we simply call the completion block with a size of zero to hide our custom view. If everything goes as expected, we create a custom string with the data we want to show to the user and set it as the text of the contentLabel.
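
Putting that together, a configure(with:context:completion:) implementation might look like the following sketch (the label text format and the use of hostedViewMaximumAllowedSize as the returned size are assumptions):

    import Intents
    import IntentsUI
    import UIKit

    class IntentViewController: UIViewController, INUIHostedViewControlling {

        @IBOutlet weak var contentLabel: UILabel!

        func configure(with interaction: INInteraction,
                       context: INUIHostedViewContext,
                       completion: @escaping (CGSize) -> Void) {
            guard let intent = interaction.intent as? INSendPaymentIntent,
                let payee = intent.payee,
                let amount = intent.currencyAmount,
                let value = amount.amount,
                let currencyCode = amount.currencyCode else {
                    // Hide the custom view if anything we want to show is missing.
                    completion(CGSize.zero)
                    return
            }

            contentLabel.text = "Sending \(value) \(currencyCode) to \(payee.displayName)"

            // Ask Siri for the largest size allowed for hosted views.
            completion(self.extensionContext?.hostedViewMaximumAllowedSize ?? CGSize.zero)
        }
    }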

Run the extension again and you'll see the new view inside Siri!

New custom UI in Siri extension

Siri still shows the default view. We can hide it by making our view controller conform to the INUIHostedViewSiriProviding protocol.
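
The conformance can be as small as this, added to the same IntentViewController:

    extension IntentViewController: INUIHostedViewSiriProviding {

        // Tell Siri that our view already shows the payment details,
        // so the default transaction summary can be hidden.
        var displaysPaymentTransaction: Bool {
            return true
        }
    }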

By returning true from the displaysPaymentTransaction property, we tell Siri that our view controller takes care of displaying all the necessary information to the user and that the default view can be hidden. The result is much cleaner now!

Final result of the custom UI displayed by Siri

Note: As you can see in this picture, when you specify a currency other than US dollars, Siri correctly understands it and passes the currency code to the extension. Unfortunately, the text Siri displays always shows US dollars. I have reported this bug to Apple!

Conclusion

I hope you have enjoyed this tutorial. Siri is very powerful even if limited to some types of applications at the moment. If you plan to implement it in your own apps, make sure to market it well to your users because they may not be aware of how cool and advanced your app has become!

If you want to learn more about integrating Siri in your app, or if you want to find out about some of the other cool developer features of iOS 10, check out Markus Mühlberger's course.

Also, check out some of our other free tutorials on iOS 10 features.

Published 2017-01-06 by Patrick Balestra

Firebase Security Rules

Firebase Realtime Database security rules are how you secure your data from unauthorised users and protect your data structure.  

In this quick tip tutorial, I will explain how to configure your database security rules properly so that only authorised users have read or write access to data. I'll also show you how to structure your data to make it easy to secure.

The Problem

Let's assume we have JSON data in our Firebase database, as in the example below:
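
The sample data might look something like this (the names are placeholders; the values are chosen to match the issues listed next):

    {
      "users": {
        "user1": {
          "firstName": "John",
          "lastName": "Doe",
          "age": "23",
          "phone": "08033287644"
        },
        "user2": {
          "firstName": "Mary",
          "lastName": "Jane",
          "age": "19",
          "phone": "0803334"
        },
        "user3": {
          "firstName": "Bob",
          "lastName": 2008,
          "age": 22,
          "phone": "08033287644"
        }
      }
    }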

Looking at the database, you can see that there are some issues with our data:

  1. Two users (user1 and user3) have the same phone numbers. We'd like these to be unique.
  2. user3 has a number for last name, instead of a string.
  3. user2 has only seven digits in their phone number, instead of 11. 
  4. The age value for user1 and user2 is a string, while that of user3 is a number.

With all these flaws highlighted in our data, we have lost data integrity. In the following steps, I will show you how to prevent these from occurring. 

Permissive Rules

The Firebase realtime database has the following rule types:

  • .read: Describes if and when data is allowed to be read by users.
  • .write: Describes if and when data is allowed to be written.
  • .validate: Defines what a correctly formatted value looks like, whether it has child attributes, and the data type.
  • .indexOn: Specifies a child to index to support ordering and querying.

Read more about them in the Firebase docs.

Here is a very permissive rule for the users key in our database. 
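
It might look like this:

    {
      "rules": {
        "users": {
          ".read": true,
          ".write": true
        }
      }
    }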

This is bad, because it gives anyone the ability to read or write data to the database. Anyone can access the path /users/ as well as deeper paths. Not only that, but no structure is imposed on the users' data.

Access Control Rules
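
A minimal sketch of such rules is shown below; the read and write expressions fall on lines 5 and 6, which the next paragraphs refer to:

    {
      "rules": {
        "users": {
          "$uid": {
            ".read": "$uid === auth.uid",
            ".write": "$uid === auth.uid"
          }
        }
      }
    }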

With these rules, we restrict access to the user records to logged-in users. Not only that, but users can only read or write their own data. We do this with a wildcard: $uid. This is a variable that represents the child key (variable names start with $). For example, when accessing the path /users/user1, $uid is "user1".

Next, we make use of the auth variable, which represents the currently authenticated user. This is a predefined server variable supplied by Firebase. In lines 5 and 6, we're enforcing the constraint that only the authenticated user whose ID matches the record's key can read or write its data. In other words, for each user, read and write access is granted only to /users/<uid>/, where <uid> is the currently authenticated user's ID.

Other Firebase server variables are:  

  • now: The current time in milliseconds since the Unix epoch.
  • root: A RuleDataSnapshot representing the root path in the Firebase database as it exists before the attempted operation.
  • newData: A RuleDataSnapshot representing the data as it would exist after the attempted operation. It includes the new data being written as well as the existing data.
  • data: A RuleDataSnapshot representing the data as it existed before the attempted operation.
  • auth: Represents an authenticated user's token payload.

Read more about these and other server variables in the Firebase docs.

Enforcing Data Structure

We can also use Firebase rules to enforce constraints on the data in our database. 

For example, in the following rules, lines 8 and 11 ensure that any new value for the first name or last name must be a string. Line 14 makes sure that age is a number. Finally, lines 17 and 18 enforce that the phone number value must be a string of length 11.
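
A sketch of rules along those lines follows (the line numbers mentioned above match this layout only approximately, and the field names are assumptions based on the sample data shown earlier):

    {
      "rules": {
        "users": {
          "$uid": {
            ".read": "$uid === auth.uid",
            ".write": "$uid === auth.uid",
            "firstName": {
              ".validate": "newData.isString()"
            },
            "lastName": {
              ".validate": "newData.isString()"
            },
            "age": {
              ".validate": "newData.isNumber()"
            },
            "phone": {
              ".validate": "newData.isString() && newData.val().length === 11"
            }
          }
        }
      }
    }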

But how do we prevent duplicate phone numbers?

Preventing Duplicates

Next, I'll show you how to prevent duplicate phone numbers.

Step 1: Normalize the Data Structure

The first thing we need to do is modify the root structure to include a top-level /phoneNumbers/ node. Then, when creating a new user, we will also add the user's phone number to this node once validation succeeds. Our new data structure will look like the following:
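
Something like this, with the remaining users omitted for brevity (mapping each phone number to the uid that owns it is one possible choice; you could also simply store true):

    {
      "users": {
        "user1": {
          "firstName": "John",
          "lastName": "Doe",
          "age": 23,
          "phone": "08033287644"
        }
      },
      "phoneNumbers": {
        "08033287644": "user1"
      }
    }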

Step 2: Enforce New Data Structure

We need to modify the security rules to enforce the data structure: 
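
A sketch of the updated rules might look like this (the earlier field validations are omitted for brevity, and the rule under /phoneNumbers/ that ties each entry to the writing user is an assumption):

    {
      "rules": {
        "users": {
          "$uid": {
            ".read": "$uid === auth.uid",
            ".write": "$uid === auth.uid",
            "phone": {
              ".validate": "newData.isString() && newData.val().length === 11 && !root.child('phoneNumbers').hasChild(newData.val())"
            }
          }
        },
        "phoneNumbers": {
          "$phone": {
            ".write": "auth !== null",
            ".validate": "newData.val() === auth.uid"
          }
        }
      }
    }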

Here, we're making sure the phone number is unique by checking if it is already a child of the /phoneNumbers/ node with the given phone number as key. In other words, we're checking that the phone number has not been registered by a user already. If it has not, then validation is successful and the write operation will be accepted—otherwise it will be rejected. 

Your app will need to add the phone number to the phone numbers list when creating a new user, and it will need to delete a user's phone number if that user is deleted.

Simulating Validation and Security Rules

You can simulate your security rules in the Firebase console by clicking the Simulator button. Add your security rules, select the type of simulation (either read or write), input some data with a path, and click the Run button: 

Firebase Simulator

If the value of the first name is a number instead of a string, validation will fail and write access is denied:

Write Access Denied

Conclusion

In this quick tip tutorial, you learned about Firebase Database security rules: how to prevent unauthorised access to data and how to make sure that the data in your database keeps a consistent structure.

To learn more about Firebase Database security rules, refer to the official documentation. And check out some of our other Firebase tutorials and courses here on Envato Tuts+!

Published 2017-01-10 by Chike Mgbemena

Getting Started With a React Native App Template

Designing a React Native app from scratch is often a cumbersome process—especially the design part, because you have to plan for both Android and iOS devices. Not only that, but you also have to make sure your app looks nice on different screen sizes. 

This is where templates come in handy. They handle the initial design for you so that your app looks nice with minimal design effort on your part. There are a handful of such templates at CodeCanyon, a marketplace for templates and plugins. There you can find different kinds of templates geared to the specific type of app that you want to create.

In this tutorial, we'll take a look at how to use the BeoStore template. As the name suggests, BeoStore is an e-commerce app template for React Native. 

Getting the Template From the Marketplace

You can download the template by going to the BeoStore product page at CodeCanyon. It's a paid template, but it's worth the investment, because it has most of the features needed in an e-commerce app. All you need to do is configure the template and customize it to your liking. To get an idea what it offers out of the box, here are some of its highlight features:

  • Full integration with WooCommerce: if you're running a WooCommerce website, the app can display the same products which you have on your existing website.
  • Support for both iOS and Android: the app runs and looks good on both Android and iOS platforms.
  • Social logins: customers can log in using their Facebook or Google account.
  • Easy customization: everything is broken down into components. This ensures that you can easily customize the template based on your brand.
  • Push notifications: this automatically alerts your customers when there's an update to the status of their order. You can also send out push notifications for product promotions. Push notifications are implemented with Firebase.  
  • Multi-language support: out of the box you get English as the main language. Vietnamese exists as a second option, but you can also add your own language.
  • Secure payment integration: payments are done with PayPal.

If you don't have an Envato account yet, you can sign up here. Once that's done, log in to your newly created account. Then you can go back to the BeoStore page and click on the Buy Now button. 

Setting Up the Project

Once you've purchased the template, you'll get a download link to the template's archive file. Extract that and you'll get a CodeCanyon folder which contains MStore 2.2.

MStore 2.2 is the directory for the template project. Go ahead and open a new terminal window inside that directory and execute the following command:
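
Assuming you have Node.js and the React Native CLI installed, that command is the usual dependency install:

    npm install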

This will install all the project dependencies. This may take a while depending on your download speed, because it has to download a lot of dependencies. Take a look at package.json if you want to see the packages it needs to download.

Once that's done, there's an additional step if you want to build for iOS devices. Go to the iOS folder and execute the following:

Next, for Android, connect your mobile device to your computer and execute:
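
With USB debugging enabled on the device, you can verify that it is detected with:

    adb devices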

This will list all the Android devices connected to your computer. If this is the first time you're connecting the device, you should get a prompt asking whether you want to allow the computer to use USB debugging. Just tap yes when you get that prompt.

Now you can run the app on your Android device:
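
As referenced later in the troubleshooting section, the command is:

    react-native run-android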

For iOS:
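
And correspondingly:

    react-native run-ios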

If you didn't encounter any errors, you should be greeted with the following page:

MStore Template Home page

To give you an idea of the different pages available in the template, here are a few more screenshots:

MStore Template Cart
MStore Template Checkout
MStore Template Login
MStore Template Product Page

Troubleshooting Project Setup

In this section, I've compiled a list of common project setup errors and their solutions. 

Development Server Didn't Start

If the development server didn't automatically start when you executed react-native run-android or react-native run-ios, you can manually run it by executing:
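
From the project's root directory:

    react-native start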

Watch Took Too Long to Load

If you get an error similar to the following:

This happens because an existing Watchman instance is already running. Watchman is a file-watching service used by the React Native development server. You can solve this error and shut down Watchman by executing the following commands:
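
One way to do this (the exact commands may vary with your setup) is:

    watchman watch-del-all
    watchman shutdown-server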

Could Not Run ADB Reverse

If you're getting the following error:

It means that your Android device is running on a version that's lower than 5.0 (Lollipop). This isn't actually an error. It simply means that your Android device doesn't support adb reverse, which is used to connect the development server to your device via USB debugging. If this isn't available, React Native falls back to debugging using Wi-Fi. You can find more information about it here: Running on Device.

Something else might be causing your build to fail. You can scroll up the terminal to see if there are any errors that happened before that.

Can't Find Variable _fbBatchedBridge

If you're getting the following error and the development server is running in Wi-Fi mode, this means that you haven't set up the IP of your computer in your Android device. (This usually only comes up with Android devices below version 5.0.)

You can execute the following to show the React Native developer options on your device:
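
With the device connected over USB, run:

    adb shell input keyevent 82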

Select Dev Settings from the menu that shows up. Under the Debugging section, tap on Debug server host & port for device. Enter your computer's internal IP address (the one assigned by your router) along with the port on which the development server is running, and press OK:

Go back to the home screen of the app and execute adb shell input keyevent 82 again. This time select Reload to reload the app.

Could Not Find Firebase, App Compat, or GMS Play Services

If you're getting "could not find" errors, this means you haven't installed the corresponding package using the Android SDK Manager.

Here are the packages that need to be installed:

  • Android Support Repository
  • Android Support Library
  • Google Play Services
  • Google Repository

Make sure to also update existing packages if there's an available update.

Other Problems

If your problem doesn't involve any of the above, you can try the following:

  • Check out the documentation on troubleshooting.
  • Check out the template product comments. You can search for the error you're getting. Try to generalize and shorten the error message though—don't just search for the entire error message. If you can't find the error, you can try asking your own question in the comments thread. The support team usually replies promptly.
  • Try searching for the error on Google. Even if the results you find don't involve the use of the template, they will give you an idea on how to solve the problem.
  • Search on StackOverflow or ask a new question. Make sure to include all the necessary details (e.g. error message, steps that you've taken). There's a good article about how to ask questions on StackOverflow.

Customizing the Template

A good place to start learning how to do things in the template is its documentation:

  • Project Layout: shows where to find the different files in the template and what they're used for.
  • Migrate WooCommerce: shows how you can hook up your existing WooCommerce website to the app. Hooking up the app to your WooCommerce means that it will be able to fetch all the product categories and products in your WooCommerce store. 
  • Migrate Services: shows how to configure the app's name, Firebase (for push notifications), and social login.
  • Customize: shows how to customize the language and themes.

Be sure to check those out! I'm not going to repeat what was mentioned in the documentation. Instead, what we're going to do in this section is to actually customize the template so it looks the way we want.

Most of the project configuration settings are stored inside the app/Constants.js file. Here are a few examples of things which you can change from this file:

WooCommerce Integration: The URL of the WooCommerce store being used by the app. By default, this uses mstore.io.

Social logins: This is implemented using Auth0, an authentication platform. By default, the template only supports Google and Facebook sign-ins. But you should be able to add any service which Auth0 supports.

Icons: You can use any icon from Fontawesome, but you should prefix the name with ios-.

Theme: Colors for the different components that make up each page can also be updated. For example, if you want to change the header background color, you can update the value for TopBar:
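
Purely as an illustration, the entry might look something like the snippet below; the surrounding object name and the color value are assumptions, so check your copy of app/Constants.js for the exact keys:

    // Hypothetical excerpt from app/Constants.js
    const Color = {
        TopBar: '#34becd', // header background color; replace with your brand color
    };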

Images: The splash screen and other images can also be updated by specifying a new path to each of the following:

These images are stored in the app/images directory; you can simply replace them if you don't want to keep the old ones.

  • You can also change the PayPal options from this file. Be sure to create your own PayPal Developer Account to obtain the clientID and secretKey. Don't forget to update sandBoxMode to false when you deploy your app to production, because by default it uses sandbox mode so that no actual money will be spent on transactions.
  • To customize individual pages, you need to go to the app/containers directory. This is where you will find the files for each page. For example, if you want to customize the home page, navigate to the home folder and open the index.js file. Once opened, you'll see that the page uses the <ImageCard> component to render each product category. So if you want to add a general styling for the <ImageCard> component, you have to update the app/Components/ImageCard/index.js file. Otherwise, you can simply update the styles within the page itself:

Conclusion

This tutorial is by no means a comprehensive guide on how to use the BeoStore template. But we have covered quite a lot of ground, especially on setup troubleshooting, which the official documentation lacks. 

The next step is to hook this template up with your WooCommerce website or even repurpose it so it can be used for other types of apps.

Download the template now, or if you want to learn more about it, you can check out the documentation here. You can also find many more React Native app templates on CodeCanyon.

Published 2017-01-11 by Wernher-Bel Ancheta

New Code eBooks Available for Subscribers

Do you want to learn more about developing iOS apps with Swift? How about building web applications with Go, or functional programming in JavaScript? Our latest batch of eBooks will teach you all you need to know about these topics and more.

our new selection of code eBooks

What You’ll Learn

In the past couple of months we’ve made 16 new eBooks available for Envato Tuts+ subscribers to download. Here’s a selection of those eBooks and a summary of what you can learn from them.

1. TypeScript Design Patterns

TypeScript Design Patterns

In programming, certain problems occur over and over again. The repeatable solutions to these problems are known as design patterns, and applying them is a great way to improve the efficiency of your programs and boost your productivity.

This book is a collection of the most important patterns you need to improve your applications’ performance and your productivity. The journey starts by explaining the current challenges when designing and developing an application and how you can solve these challenges by applying the correct design pattern and best practices.

2. Swift: Developing iOS Applications

Swift: Developing iOS Applications

The Swift: Developing iOS Applications eBook will take you on a journey to becoming an efficient iOS and macOS developer with Swift, the hottest language in town. Right from the basics to advanced topics, this eBook covers everything in detail.

The learning path consists of four modules. Each of these modules is a mini-book in its own right, and as you complete each one, you’ll gain key skills and be ready for the material in the next module.

3. Python: Journey From Novice to Expert

Python: Journey from Novice to Expert

Python is a dynamic and powerful programming language that is used in a wide range of domains. It has a simple, easy-to-use syntax and a powerful standard library with hundreds of modules that provide routines for a wide range of applications, which makes it a popular language among programming enthusiasts.

This eBook will take you on a journey from basic programming practices to high-end tools and techniques that will give you an edge over your peers.

4. Learning GraphQL and Relay

Learning GraphQL and Relay

There’s a new choice for implementing APIs: the open-source and Facebook-created GraphQL specification. Designed to solve many of the issues of working with REST, GraphQL comes alongside RelayJS, a React library for querying a server that implements the GraphQL specification. This book takes you quickly and simply through the skills you need to be able to build production-ready applications with both GraphQL and RelayJS.

5. JavaScript: Functional Programming for JavaScript Developers

JavaScript: Functional Programming for JavaScript Developers

Functional programming is a way of writing cleaner code through clever ways of mutating, combining, and using functions. And JavaScript provides an excellent medium for this approach. By learning how to expose JavaScript's true identity as a functional language, we can implement web apps that are more powerful, easier to maintain and more reliable.

This book will take you on a journey to show you how functional programming when combined with other techniques makes JavaScript programming more efficient.

6. Ionic 2 Blueprints

Ionic 2 Blueprints

Ionic 2, the latest version of the Ionic mobile SDK, is built on top of the latest technologies, such as Angular 2, TypeScript, and Sass. The idea behind Ionic 2 is to make the entire app development process even more fun.

This book makes it possible to build fun and engaging apps using Ionic 2. You will learn how to use the various Ionic components, integrate external services, work with device capabilities, and, most importantly, how to make professional apps with Ionic 2.

By the end of this book, you will be able to proudly call yourself a pro Ionic developer who can create a host of different apps with Ionic, and you’ll have a deeper practical understanding of Ionic.

7. Go: Building Web Applications

Go: Building Web Applications

Go is an open-source programming language that makes it easy to build simple, reliable, and efficient software. It is a statically typed language with syntax loosely derived from that of C, adding garbage collection, type safety, some dynamic-typing capabilities, additional built-in types such as variable-length arrays and key-value maps, and a large standard library.

This eBook starts with a walkthrough of the topics most critical to anyone building a new web application. Whether it’s keeping your application secure, connecting to your database, enabling token-based authentication, or utilizing logic-less templates, this book has you covered.

8. Django: Web Development With Python

Django: Web Development With Python

Data science is hot right now, and the need for multitalented developers is greater than ever before. A basic grounding in building apps with a framework as minimalistic, powerful, and easy-to-learn as Django will be a useful skill to launch your career as an entrepreneur or web developer. 

Django is a web framework that was designed to strike a balance between rapid web development and high performance. This book will take you on a journey to becoming an efficient web developer with a thorough understanding of the key concepts of the Django framework. By the end of the four modules, you will be able to leverage Django to develop a fully functional web application with minimal effort.

Start Learning With a Yearly Subscription

Subscribe to Envato Tuts+ for access to our library of hundreds of eBooks. With a Yearly subscription, you can download up to five eBooks per month, while the Yearly Pro subscription gives you unlimited access.

2017-01-12T18:28:00.000Z by Andrew Blackman

Introduction to Android Things

For over a year, Google worked with the Project Brillo operating system (which was built on the lower levels of Android) for Internet of Things (IoT) connected devices, even going so far as to have lessons and talks on it during the Ubiquity Dev Summit in January of 2016. 

In December of 2016, Google released an updated version of this operating system with another tier that allows Android application developers to use a stripped-down version of Android when creating connected devices. Although Android Things is currently in an early developer preview state, it looks promising as an IoT platform for quickly creating prototypes and supporting users at scale.

In this article, I'll give you an introduction to how Android Things works and look briefly at some examples of how you could use it.

What Is Android Things?

Android Things is a lightweight version of Android that can be flashed onto different hardware prototyping boards, in order to easily create connected Internet of Things (IoT) devices. This makes embedded coding accessible to developers who might not have previous experience. With Android Things, Google has also provided a library that you can use to build apps that read from and write to different pins on the boards, allowing you to hook up different sensors and actuators to interact with the world.

So what makes Android Things different from other IoT prototyping solutions? Google has done a lot of the legwork to make specific hardware prototyping boards work, and will continue to provide updates to support built-in Bluetooth, wireless networking, software updates, and other functionality.

This means that you, as a developer and creator, can start by prototyping your IoT device using a development board such as Raspberry Pi. Then, when you're ready to take your product to market, you can design a stripped-down version of the hardware to save hardware production costs.

Current Device and Feature Support

At the time of this article, Android Things supports three prototyping boards: the Raspberry Pi 3 Model B, the Intel Edison with Arduino breakout board, and the NXP Pico i.MX6UL.

While this may seem limited, a restricted supported hardware list allows Google to fully support these common prototyping boards and provides developers with a sturdy platform that has been tested and certified. 

Intel Edison with Arduino Breakout Prototyping Board

In addition to the previously mentioned three boards, Android Things will soon support the Intel Joule 570x and the NXP Argon i.MX6UL, giving you more hardware options for development.

Raspberry Pi 3 Model B Prototyping Board

Once you have a prototyping board, you will want to know what you can build with it. 

While we will go over the process of flashing a board and building connected projects in later tutorials, you can find a list of sample projects using drivers provided by Google for various sensors and actuators on their Android Things Driver Samples GitHub page.

Some driver examples include servo motors, Pulse Width Modulation (PWM) speakers, buttons, GPS sensors, and HT16K33-based alphanumeric segment displays.

7-segment and 14-segment HT16K33 backpack displays

In addition, you can read the source for these drivers on GitHub to create your own drivers for digital sensors or digital/PWM actuators, such as this quick example that I have written for the HC SR501 motion detector sensor.

HC SR501 Motion Detector Sensor

One thing to remember is that, at the time of this writing, Android Things is in the first iteration of its developer preview. This means that, because it's an early release for testing and feedback, some features are currently unavailable or may be buggy as the platform is tested and built out. 

Currently, Bluetooth communication is not enabled on the boards, and support for simple analog sensors is not included in the Android Things general-purpose input/output (GPIO) classes, though there is a technical reason for this, and you can still use SPI and I2C, as mentioned in this AOSP issue.

As this platform is still new, there are not many drivers for sensors or other hardware, so developers using the platform will need to either create their own drivers or make do with what is currently available or open-sourced by other developers in the Android Things community.

Limitless Possibilities

One of the best things about building Internet of Things devices is that you aren't limited to the hardware that ships with a phone, but are able to build out complex devices that fit the needs of your project. 

Although you may need to write the drivers for your own actuators and sensors, this process is still relatively straightforward given that the platform uses Java and an Android base, so you don't need to dig into low-level languages to make your product work. This means if you decide to make an animated skeleton that uses motion detection and servo motors to move, you can!

 

In addition to being able to support new hardware, you get valuable portions of the Android ecosystem to work with. Using already supported features from Android, such as the Camera API, Play Services and Firebase, you can easily build a device that takes a picture through an Internet-connected device and attach it to your back-end service, such as Firebase Storage, or analyze the image through Google Play Service's vision API.

Raspberry Pi with Camera Module

Conclusion

Given the ability to create your own devices and easily interact with Google Play Services and other back-end services (Firebase, machine learning services, etc.), Android Things promises to provide an easy-to-use platform for quickly creating new connected devices that can be brought to market or used for your own personal projects.

Stay tuned for some in-depth tutorials on getting set up and building projects with Android Things. To learn more about some of these related technologies, check out our other tutorials here on Envato Tuts+!

2017-01-17T10:16:57.000Z by Paul Trebilcox-Ruiz

Swift from Scratch: Introduction

In 2014, Apple took the developer community by surprise with the introduction of Swift, a brand new programming language. Swift has come a long way, and it is hard to believe that the language is celebrating its third anniversary this year. A few months ago, Apple released Swift 3, a major milestone for the language. In this series, I'll teach you the fundamentals of Swift 3.

Swift feels familiar if you have used Objective-C to develop iOS or macOS applications, but there are a number of important differences. I'll kick this series off by showing you in what ways Swift differs from Objective-C and why those differences are a good thing. Let's get started.

1. Prerequisites

Programming

Throughout this series, I make references to Objective-C and compare the Swift programming language with Objective-C. However, to follow along, there is no need to be familiar with Objective-C.

That said, it is important that you have experience with a programming language. While this series focuses on Swift, it doesn't cover the basics of programming. I expect you to be familiar with variables, constants, functions, control flow, and object-oriented programming.

If you are familiar with Objective-C, Java, Ruby, PHP, or JavaScript, then you won't have problems understanding the concepts explained in this series. As a matter of fact, you will notice that Swift shares similarities with a number of popular programming languages, including Objective-C.

Xcode

Swift 3 is only supported by Xcode 8, and you need to install the latest version of Apple's IDE (Integrated Development Environment) to follow along. You can download Xcode either from the App Store or Apple's Developer Center.

2. Swift

Compared to Objective-C or Java, Swift is an expressive, succinct language that often reminds me of Ruby and JavaScript. Even though the creator of Swift, Chris Lattner, took inspiration from other languages, Swift is very much a language that stands on its own feet.

As you may know, Objective-C is a strict superset of C. Swift, however, is not. While Swift uses curly braces and shares several keywords with the C programming language, it is not compatible with C.

Swift is a modern programming language that feels intuitive, especially if you are used to Java or C-based programming languages like Objective-C. During the development and design of Swift, Chris Lattner focused on a number of key characteristics that ended up defining the language.

Safety

Safety is one of Swift's foundations. In this series, you quickly learn that Swift is very different from Objective-C in terms of safety, and this directly affects the code you write. If you have worked with Objective-C, this takes some getting used to.

LLVM

Chris Lattner also designed the LLVM (Low Level Virtual Machine) compiler, and it shouldn't be a surprise that Swift is built with the LLVM compiler. The result is speed, power, and reliability. Swift is significantly faster than Objective-C in most scenarios. Read this article by Jesse Squires if you are interested in the nitty-gritty details.

Type Inference

Type safety is one of Swift's key features. Swift inspects your code at compile time and warns you about type mismatches. This means that you can catch errors early, avoiding a range of common bugs.

Declaring the type of every variable and constant yourself would quickly become tedious, though. Luckily, Swift helps you with this: it is often smart enough to infer the type of variables and constants, which means that you don't have to explicitly declare the type of each one. In the following code snippet, we declare a variable a and assign it the value "this is a string". Swift is smart enough to infer that a is of type String.
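Here is that snippet:

    // Swift infers that a is of type String from the value we assign to it.
    var a = "this is a string"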

This is a trivial example, but you'll find out that Swift can also handle more complex scenarios.

Variables and Constants

Constants are useful in C and Objective-C, but most developers use them sparingly. In Swift, constants are just as important and common as variables. If the value of a variable doesn't change, then that variable should be a constant. Variables are declared using the var keyword. Constants are declared using the let keyword.
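For example (the names here are just placeholders):

    var score = 0          // a variable, declared with var
    let maximumScore = 100 // a constant, declared with let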

Not only does this make your intent clearer, it also improves safety by ensuring that the variable's value is not accidentally changed. We take a closer look at variables and constants later in this tutorial.

Semicolons

In Swift, semicolons are not required at the end of a statement; they are only needed when you write multiple statements on the same line. Take a look at the following example to better understand the use of semicolons in Swift.
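A minimal illustration, using arbitrary values:

    // No semicolon is needed at the end of a statement.
    let language = "Swift"
    print(language)

    // A semicolon is required only to separate statements on the same line.
    let version = 3; print(version)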

We are only scratching the surface. You'll learn about many more features and concepts throughout this series. Instead of overloading you with more theory, I suggest you get your feet wet by writing some code. This brings us to one of the best features of Swift and Xcode: playgrounds.

3. Playgrounds

Apple introduced playgrounds in Xcode 6. Playgrounds are the perfect tool for learning Swift. A playground is an interactive environment in which you can write Swift and immediately see the result. Not only does it make learning Swift more fun, it is much faster and more intuitive than setting up a project in Xcode.

As a matter of fact, it is so easy that you might as well jump in and create your first playground. Open Xcode 8 and select New > Playground... from the File menu. Name the playground and set Platform to iOS.

Create a New Playground

Tell Xcode where you would like to save the playground and click Create. Instead of creating a project with a bunch of files and folders, a playground is nothing more than a file with a .playground extension. A playground is more than a file under the hood, but that isn't something we need to worry about for now.

The user interface you are presented with couldn't be simpler. On the left, you see a code editor with a comment at the top, an import statement for importing the UIKit framework, and one line of code that shouldn't be too difficult to understand. On the right, you see the output or results generated by the code on the left.

The User Interface of a Playground in Xcode

Let's take a moment to understand the code in your new playground. The first line should look familiar if you have worked with Objective-C, PHP, or JavaScript. Comments in Swift start with two forward slashes or, in the case of multiline comments, start with /* and end with */.

Because we selected iOS as the platform when we created the playground, Xcode added an import statement for the UIKit framework. This gives us access to every symbol defined in the UIKit framework.

The third line looks familiar, but there are a few details that need clarifying. We declare a variable str and assign it a string. This line of code is easy to understand, but note that the variable's name is preceded by the var keyword instead of the variable's type as you would expect in Objective-C. The same statement in Objective-C would look something like this.

In Objective-C, we would replace the var keyword with the variable's type, prefix the string with an @ symbol, and end the statement with a semicolon. It is important to understand that the var keyword doesn't replace the type specifier in Objective-C. It is nothing more than a keyword to indicate that str is a variable, not a constant. 

Let me explain this in more detail. Add the following line of code to the playground.
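The exact string doesn't matter; something like this will do:

    let hello = "Hello, world!"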

The let keyword tells the compiler that hello is a constant, not a variable. Both str and hello are of type String, but str is a variable and hello is a constant. The difference is simple to understand if we add two more lines of code.
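For example:

    str = "This is a new value."   // fine, because str is a variable
    hello = "This is a new value." // error, because hello is a constant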

Assigning a new value to str doesn't pose a problem. Assigning a new value to hello, however, results in an error. Xcode tells us that it cannot assign a new value to hello, because hello is a constant, not a variable. This is another key feature of Swift, which will take some getting used to.

The value of a constant cannot be changed

The idea is simple. If the value of a variable is not going to change, then it should be a constant instead of a variable. While this may seem like a semantic detail, I guarantee that it makes your code safer and less prone to errors. Be prepared, because you are going to see the let keyword a lot in this series.

We use playgrounds extensively throughout this series because it is a great way to learn the language. There are a few other powerful playground features that we haven't covered yet, but we first need to understand the basics of the Swift language before we can benefit from those.

Conclusion

I have yet to meet a developer who doesn't like Swift, and that's saying something. Swift has a number of concepts that require some getting used to, but I am confident that you too will end up enjoying Swift and appreciating its power, elegance, and concision. In the next installment of this series, we'll start exploring the basics of Swift.

If you want to get up and running with the Swift language quickly, check out our course on creating iOS apps with Swift.

Or check out some of our other tutorials and courses on Swift and iOS development!

2017-01-23T20:03:31.218Z by Bart Jacobs

Using the Speech Recognition API in iOS 10

Final product image
What You'll Be Creating

Introduction

Siri has been a core feature of iOS since it was introduced back in 2011. Now, iOS 10 brings new features to allow developers to interact with Siri. In particular, two new frameworks are now available: Speech and SiriKit. 

Today, we are going to take a look at the Speech framework, which allows us to easily translate audio into text. You'll learn how to build a real-life app that uses the speech recognition API to check the status of a flight.

If you want to learn more about SiriKit, I covered it in my Create SiriKit Extensions in iOS 10 tutorial. For more on the other new features for developers in iOS 10, check out Markus Mühlberger's course, right here on Envato Tuts+.

Usage

Speech recognition is the process of translating live or pre-recorded audio to transcribed text. Since Siri was introduced in iOS 5, there has been a microphone button in the system keyboard that enables users to easily dictate. This feature can be used with any UIKit text input, and it doesn't require you to write additional code beyond what you would write to support a standard text input. It's really fast and easy to use, but it comes with a few limitations:

  • The keyboard is always present when dictating.
  • The language cannot be customized by the app itself.
  • The app cannot be notified when dictation starts and finishes.
Dictation in the iOS keyboard

To allow developers to build more customizable and powerful applications with the same dictation technology as Siri, Apple created the Speech framework. It allows every device that runs iOS 10 to translate audio to text in over 50 languages and dialects.

This new API is much more powerful because it doesn't just provide a simple transcription service, but it also provides alternative interpretations of what the user may have said. You can control when to stop a dictation, you can show results as your user speaks, and the speech recognition engine will automatically adapt to the user preferences (language, vocabulary, names, etc.). 

An interesting feature is support for transcribing pre-recorded audio. If you are building an instant messaging app, for example, you could use this functionality to transcribe the text of new audio messages.

Setup

First of all, you will need to ask the user for permission to transmit their voice to Apple for analysis. 

Depending on the device and the language that is to be recognized, iOS may transparently decide to transcribe the audio on the device itself or, if local speech recognition is not available on the device, iOS will use Apple's servers to do the job. 

This is why an active internet connection is usually required for speech recognition. I'll show you how to check the availability of the service very soon.

There are three steps to use speech recognition:

  • Explain: tell your user why you want to access their voice.
  • Authorize: explicitly ask authorization to access their voice.
  • Request: load a pre-recorded audio from disk using SFSpeechURLRecognitionRequest, or stream live audio using SFSpeechAudioBufferRecognitionRequest and process the transcription.

If you want to know more about the Speech framework, watch WWDC 2016 Session 509. You can also read the official documentation.

Example

I will now show you how to build a real-life app that takes advantage of the speech recognition API. We are going to build a small flight-tracking app in which the user can simply say a flight number, and the app will show the current status of the flight. Yes, we're going to build a small assistant like Siri to check the status of any flight!

In the tutorial's GitHub repo, I've provided a skeleton project that contains a basic UI to help us in this tutorial. Download and open the project in Xcode 8.2 or higher. Starting with an existing UI lets us focus on the speech recognition API.

Take a look at the classes in the project. UIViewController+Style.swift contains most of the code responsible for updating the UI. The example datasource of the flights displayed in the table is declared in FlightsDataSource.swift.

If you run the project, it should look like the following.

The initial example project

After the user presses the microphone button, we want to start the speech recognition to transcribe the flight number. So if the user says "LX40", we would like to show the information regarding the gate and current status of the flight. To do this, we will call a function to automatically look up the flight in a datasource and show the status of the flight. 

We are first going to explore how to transcribe from pre-recorded audio. Later on, we'll learn how to implement the more interesting live speech recognition.

Let's start by setting up the project. Open the Info.plist file and add a new row with the key NSSpeechRecognitionUsageDescription (Privacy - Speech Recognition Usage Description) and, as its value, the explanation that will be shown to the user when asked for permission to access their voice. The newly added row is highlighted in blue in the following image.

The Info.plist file with the newly added key

Once this is done, open ViewController.swift. Don't mind the code that is already in this class; it is only taking care of updating the UI for us.

The first step with any new framework that you want to use is to import it at the top of the file.

To show the permission dialog to the user, add this code in the viewDidLoad() method:
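The skeleton's status property and its unavailable case are used below; the ready case is an assumption for whatever the skeleton calls its idle state, so adjust the names to match your project. A minimal sketch looks like this:

    override func viewDidLoad() {
        super.viewDidLoad()

        // Check the current speech recognition authorization status.
        switch SFSpeechRecognizer.authorizationStatus() {
        case .notDetermined:
            // We haven't asked the user yet, so show the permission dialog.
            askSpeechPermission()
        case .authorized:
            // Assumed idle state: everything is set up and ready to record.
            self.status = .ready
        case .denied, .restricted:
            // Speech recognition can't be used, so update the UI accordingly.
            self.status = .unavailable
        }
    }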

The status variable takes care of changing the UI to warn the user that speech recognition is not available in case something goes wrong. We are going to assign a new status to the same variable every time we would like to change the UI.

If the app hasn't asked the user for permission yet, the authorization status will be notDetermined, and we call the askSpeechPermission method to ask for it, as defined in the next step.

You should always fail gracefully if a specific feature is not available. It's also very important to always communicate to the user when you're recording their voice. Never try to recognize their voice without first updating the UI and making your user aware of it.

Here's the implementation of the function to ask the user for permission.
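A minimal sketch, again assuming the ready and unavailable status cases from the skeleton:

    func askSpeechPermission() {
        SFSpeechRecognizer.requestAuthorization { status in
            // The closure may run on a background queue, so hop back to the
            // main queue before touching the UI.
            OperationQueue.main.addOperation {
                switch status {
                case .authorized:
                    self.status = .ready
                default:
                    self.status = .unavailable
                }
            }
        }
    }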

We invoke the requestAuthorization method to display the speech recognition privacy request that we added to the Info.plist. We then switch to the main thread in case the closure was invoked on a different thread—we want to update the UI only from the main thread. We assign the new status to update the microphone button to signal to the user the availability (or not) of speech recognition.

Pre-Recorded Audio Recognition

Before writing the code to recognize pre-recorded audio, we need to find the URL of the audio file. In the project navigator, check that you have a file named LX40.m4a. I recorded this file myself with the Voice Memos app on my iPhone by saying "LX40". We can easily check if we get a correct transcription of the audio.

Store the audio file URL in a property:
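The property name is up to you; a simple version looks like this:

    // URL of the bundled LX40.m4a recording used for the pre-recorded example.
    let audioURL = Bundle.main.url(forResource: "LX40", withExtension: "m4a")!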

It's time to finally see the power and simplicity of the Speech framework. This is the code that does all the speech recognition for us:
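Here is a sketch of the method. The searchFlight helper comes from the skeleton project, so its exact parameter label is an assumption; the status cases are the same ones used earlier.

    func recognizeFile(url: URL) {
        // The default initializer is failable and uses the user's current locale.
        guard let recognizer = SFSpeechRecognizer(), recognizer.isAvailable else {
            self.status = .unavailable
            return
        }

        let request = SFSpeechURLRecognitionRequest(url: url)
        recognizer.recognitionTask(with: request) { result, error in
            guard let result = result else { return }

            // The closure is called repeatedly while the transcription is refined.
            if result.isFinal {
                // Look up the recognized flight number in the example datasource.
                self.searchFlight(number: result.bestTranscription.formattedString)
            }
        }
    }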

Here is what this method is doing:

  • Initialize a SFSpeechRecognizer instance and check that the speech recognition is available with a guard statement. If it's not available, we simply set the status to unavailable and return. (The default initializer uses the default user locale, but you can also use the SFSpeechRecognizer(locale:) initializer to provide a different locale.)
  • If speech recognition is available, create a SFSpeechURLRecognitionRequest instance by passing the pre-recorded audio URL.
  • Start the speech recognition by invoking the recognitionTask(with:) method with the previously created request.

The closure will be called multiple times with two parameters: a result and an error object. 

The recognizer is actually playing the file and trying to recognize the text incrementally. For this reason, the closure is called multiple times. Every time it recognizes a letter or word or it makes some corrections, the closure is invoked with up-to-date objects. 

The result object has the isFinal property set to true when the audio file was completely analyzed. In this case, we start a search in our flight datasource to see if we can find a flight with the recognized flight number. The searchFlight function will take care of displaying the result.

The last thing that we are missing is to invoke the recognizeFile(url:) function when the microphone button is pressed:
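Assuming the skeleton already wires the button to a microphonePressed action, the method only needs to kick off recognition of the bundled recording (audioURL being the property defined earlier):

    func microphonePressed() {
        recognizeFile(url: audioURL)
    }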

Run the app on your device running iOS 10, press the microphone button, and you'll see the result. The audio "LX40" is incrementally recognized, and the flight status is displayed!

 

Tip: The flight number is displayed in a UITextView. As you may have noticed, if you enable the Flight Number data detector in the UITextView, you can press on it and the current status of the flight will actually be displayed!

The complete example code up to this point can be viewed in the pre-recorded-audio branch in GitHub.

Live Audio Recognition

Let's now see how to implement live speech recognition. It's going to be a little bit more complicated compared to what we just did. You can once again download the same skeleton project and follow along.

We need a new key in the Info.plist file to explain to the user why we need access to the microphone. Add a new row with the key NSMicrophoneUsageDescription (Privacy - Microphone Usage Description) to your Info.plist, as shown in the image.

New row explaining why we need to access the microphone

We don't need to manually ask the user for permission because iOS will do that for us as soon as we try to access any microphone-related API.

We can reuse the same code that we used in the previous section (remember to import Speech) to ask for authorization. The viewDidLoad() method and the method that asks the user for permission are implemented exactly as before.

The implementation of startRecording is going to be a little bit different. Let's first add a few new instance variables that will come in handy while managing the audio session and speech recognition task.
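A sketch of those declarations is shown below (add import AVFoundation next to import Speech if Xcode complains about AVAudioEngine):

    let audioEngine = AVAudioEngine()
    let speechRecognizer: SFSpeechRecognizer? = SFSpeechRecognizer()
    let request = SFSpeechAudioBufferRecognitionRequest()
    var recognitionTask: SFSpeechRecognitionTask?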

Let's take a look at each variable separately:

  • AVAudioEngine is used to process an audio stream. We will create an audio node and attach it to this engine so that we are notified when the microphone receives audio.
  • SFSpeechRecognizer is the same class we have seen in the previous part of the tutorial, and it takes care of recognizing the speech. Given that the initializer can fail and return nil, we declare it as optional to avoid crashing at runtime.
  • SFSpeechAudioBufferRecognitionRequest is a buffer used to recognize the live speech. Given that we don't have the complete audio file as we did before, we need a buffer to allocate the speech as the user speaks.
  • SFSpeechRecognitionTask manages the current speech recognition task and can be used to stop or cancel it.

Once we have declared all the required variables, let's implement startRecording.
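Here is a sketch of the whole method. As before, the recognizing status case comes from the skeleton and searchFlight's parameter label is an assumption. Note that the iOS 10 SDK exposes audioEngine.inputNode as an optional, which is why it is unwrapped with a guard; on newer SDKs it is non-optional and the guard can be dropped.

    func startRecording() {
        // Get the device's default audio input node (optional in the iOS 10 SDK).
        guard let node = audioEngine.inputNode else { return }
        let recordingFormat = node.outputFormat(forBus: 0)

        // Append every received audio buffer to the recognition request.
        node.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { [unowned self] buffer, _ in
            self.request.append(buffer)
        }

        // Prepare and start the audio engine.
        audioEngine.prepare()
        do {
            try audioEngine.start()
            self.status = .recognizing
        } catch {
            print(error)
            return
        }

        // Start the live recognition task and search for the flight as results come in.
        recognitionTask = speechRecognizer?.recognitionTask(with: request) { [unowned self] result, error in
            if let result = result {
                self.searchFlight(number: result.bestTranscription.formattedString)
            } else if let error = error {
                print(error)
            }
        }
    }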

This is the core code of our feature. I will explain it step by step:

  • First we get the inputNode of the audioEngine. A device can possibly have multiple audio inputs, and here we select the first one.
  • We tell the input node that we want to monitor the audio stream. The block that we provide is invoked for every received audio buffer of 1,024 frames. We immediately append each audio buffer to the request so that the recognition process can start.
  • We prepare the audio engine to start recording. If the recording starts successfully, set the status to .recognizing so that we update the button icon to let the user know that their voice is being recorded.
  • Let's assign the returned object from speechRecognizer.recognitionTask(with:resultHandler:) to the recognitionTask variable. If the recognition is successful, we search the flight in our datasource and update the UI. 

The function to cancel the recording is as simple as stopping the audio engine, removing the tap from the input node, and cancelling the recognition task.
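A sketch, with cancelRecording being an assumed name for the function:

    func cancelRecording() {
        // Stop capturing audio and remove the tap we installed on the input node.
        audioEngine.stop()
        if let node = audioEngine.inputNode {
            node.removeTap(onBus: 0)
        }

        // Cancel the in-flight recognition task, if there is one.
        recognitionTask?.cancel()
    }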

We now only need to start and stop the recording. Modify the microphonePressed method as follows:
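Something like this, where ready is again the assumed idle status case from the skeleton:

    func microphonePressed() {
        switch status {
        case .recognizing:
            // A recording is already in progress, so stop it.
            cancelRecording()
            status = .ready
        default:
            // Otherwise, start listening for a flight number.
            startRecording()
        }
    }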

Depending on the current status, we start or stop the speech recognition.

Build and run the app to see the result. Try to spell any of the listed flight numbers and you should see its status appear.

 

Once again, the example code can be viewed in the live-audio branch on GitHub.

Best Practices

Speech recognition is a very powerful API that Apple provided to iOS developers targeting iOS 10. It is completely free to use, but keep in mind that it's not unlimited in usage. It is limited to about one minute for each speech recognition task, and your app may also be throttled by Apple's servers if it requires too much computation. For these reasons, it has a high impact on network traffic and power usage. 

Make sure that your users are properly instructed on how to use speech recognition, and be as transparent as possible when you are recording their voice. 

Recap

In this tutorial, you have seen how to use fast, accurate and flexible speech recognition in iOS 10. Use it to your own advantage to give your users a new way of interacting with your app and improve its accessibility at the same time. 

If you want to learn more about integrating Siri in your app, or if you want to find out about some of the other cool developer features of iOS 10, check out Markus Mühlberger's course.

Also, check out some of our other free tutorials on iOS 10 features.


2017-01-25T19:29:15.000Z by Patrick Balestra

Swift From Scratch: Introduction

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-22598


In 2014, Apple took the developer community by surprise with the introduction of Swift, a brand new programming language. Swift has come a long way, and it is hard to believe that the language is celebrating its third anniversary this year. A few months ago, Apple released Swift 3, a major milestone for the language. In this series, I'll teach you the fundamentals of Swift 3.

Swift feels familiar if you have used Objective-C to develop iOS or macOS applications, but there are a number of important differences. I'll kick this series off by showing you in what ways Swift differs from Objective-C and why those differences are a good thing. Let's get started.

1. Prerequisites

Programming

Throughout this series, I make references to Objective-C and compare the Swift programming language with Objective-C. However, to follow along, there is no need to be familiar with Objective-C.

That said, it is important that you have experience with a programming language. While this series focuses on Swift, it doesn't cover the basics of programming. I expect you to be familiar with variables, constants, functions, control flow, and object-oriented programming.

If you are familiar with Objective-C, Java, Ruby, PHP, or JavaScript, then you won't have problems understanding the concepts explained in this series. As a matter of fact, you will notice that Swift shares similarities with a number of popular programming languages, including Objective-C.

Xcode

Swift 3 is only supported by Xcode 8, and you need to install the latest version of Apple's IDE (Integrated Development Environment) to follow along. You can download Xcode either from the App Store or Apple's Developer Center.

2. Swift

Compared to Objective-C or Java, Swift is an expressive, succinct language that often reminds me of Ruby and JavaScript. Even though the creator of Swift, Chris Lattner, took inspiration from other languages, Swift is very much a language that stands on its own feet.

As you may know, Objective-C is a strict superset of C. Swift, however, is not. While Swift uses curly braces and shares several keywords with the C programming language, it is not compatible with C.

Swift is a modern programming language that feels intuitive, especially if you are used to Java or C-based programming languages like Objective-C. During the development and design of Swift, Chris Lattner focused on a number of key characteristics that ended up defining the language.

Safety

Safety is one of Swift's foundations. In this series, you quickly learn that Swift is very different from Objective-C in terms of safety, and this directly affects the code you write. If you have worked with Objective-C, this takes some getting used to.

LLVM

Chris Lattner also designed the LLVM (Low Level Virtual Machine) compiler, and it shouldn't be a surprise that Swift is built with the LLVM compiler. The result is speed, power, and reliability. Swift is significantly faster than Objective-C in most scenarios. Read this article by Jesse Squires if you are interested in the nitty-gritty details.

Type Inference

Type safety is one of Swift's key features. Swift inspects your code at compile time and warns you about type mismatches. This means that you can catch errors early, avoiding a range of common bugs.

Does that mean you have to spell out the type of everything you declare? Luckily, Swift helps you out here. Swift is often smart enough to infer the type of variables and constants, which means that you don't have to explicitly declare the type of each variable or constant. In the following code snippet, we declare a variable a and assign it the value "this is a string". Swift is smart enough to infer that a is of type String.
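
The snippet looks roughly like this:

    var a = "this is a string"   // a is inferred to be of type String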

This is a trivial example, but you'll find out that Swift can also handle more complex scenarios.

Variables and Constants

Constants are useful in C and Objective-C, but most developers use them sparingly. In Swift, constants are just as important and common as variables. If the value of a variable doesn't change, then that variable should be a constant. Variables are declared using the var keyword. Constants are declared using the let keyword.
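
For example (the names and values here are only illustrative):

    var currentYear = 2017    // a variable, declared with var
    let numberOfMonths = 12   // a constant, declared with let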

Not only does this make your intent clearer, it also improves safety by ensuring that the value is not accidentally changed. We take a closer look at variables and constants later in this tutorial.

Semicolons

In Swift, semicolons are not required. You can use semicolons, for example, to write multiple statements on the same line, but they are optional. Take a look at the following example to better understand the use of semicolons in Swift.
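
Here is a small illustration (the statements themselves are arbitrary):

    let a = 10
    let b = 20
    print(a + b)           // no semicolon required
    print(a); print(b)     // a semicolon separates two statements on one line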

We are only scratching the surface. You'll learn about many more features and concepts throughout this series. Instead of overloading you with more theory, I suggest you get your feet wet by writing some code. This brings us to one of the best features of Swift and Xcode: playgrounds.

3. Playgrounds

Apple introduced playgrounds in Xcode 6. Playgrounds are the perfect tool for learning Swift. A playground is an interactive environment in which you can write Swift and immediately see the result. Not only does it make learning Swift more fun, it is much faster and more intuitive than setting up a project in Xcode.

As a matter of fact, it is so easy that you might as well jump in and create your first playground. Open Xcode 8 and select New > Playground... from the File menu. Name the playground and set Platform to iOS.

Create a New Playground

Tell Xcode where you would like to save the playground and click Create. Instead of creating a project with a bunch of files and folders, a playground is nothing more than a file with a .playground extension. A playground is more than a file under the hood, but that isn't something we need to worry about for now.

The user interface you are presented with couldn't be simpler. On the left, you see a code editor with a comment at the top, an import statement for importing the UIKit framework, and one line of code that shouldn't be too difficult to understand. On the right, you see the output or results generated by the code on the left.

The User Interface of a Playground in Xcode

Let's take a moment to understand the code in your new playground. The first line should look familiar if you have worked with Objective-C, PHP, or JavaScript. Comments in Swift start with two forward slashes or, in the case of multiline comments, start with /* and end with */.

Because we selected iOS as the platform when we created the playground, Xcode added an import statement for the UIKit framework. This gives us access to every symbol defined in the UIKit framework.

The third line looks familiar, but there are a few details that need clarifying. We declare a variable str and assign it a string. This line of code is easy to understand, but note that the variable's name is preceded by the var keyword instead of the variable's type as you would expect in Objective-C. The same statement in Objective-C would look something like this.

In Objective-C, we would replace the var keyword with the variable's type, prefix the string with an @ symbol, and end the statement with a semicolon. It is important to understand that the var keyword doesn't replace the type specifier in Objective-C. It is nothing more than a keyword to indicate that str is a variable, not a constant. 

Let me explain this in more detail. Add the following line of code to the playground.
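
The line looks something like this (the exact string literal doesn't matter):

    let hello = "Hello, world!"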

The let keyword tells the compiler that hello is a constant, not a variable. Both str and hello are of type String, but str is a variable and hello is a constant. The difference is simple to understand if we add two more lines of code.
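
For example (the new values are arbitrary):

    str = "Hello, Swift"     // fine, str is a variable
    hello = "Hello, Swift"   // error, hello is a constant and cannot be reassigned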

Assigning a new value to str doesn't pose a problem. Assigning a new value to hello, however, results in an error. Xcode tells us that it cannot assign a new value to hello, because hello is a constant, not a variable. This is another key feature of Swift, which will take some getting used to.

The value of a constant cannot be changed

The idea is simple. If the value of a variable is not going to change, then it should be a constant instead of a variable. While this may seem like a semantic detail, I guarantee that it makes your code safer and less prone to errors. Be prepared, because you are going to see the let keyword a lot in this series.

We use playgrounds extensively throughout this series because it is a great way to learn the language. There are a few other powerful playground features that we haven't covered yet, but we first need to understand the basics of the Swift language before we can benefit from those.

Conclusion

I have yet to meet a developer who doesn't like Swift, and that's saying something. Swift has a number of concepts that require some getting used to, but I am confident that you too will end up enjoying Swift and appreciating its power, elegance, and concision. In the next installment of this series, we'll start exploring the basics of Swift.

If you want to get up and running with the Swift language quickly, check out our course on creating iOS apps with Swift.

Or check out some of our other tutorials and courses on Swift and iOS development!

2017-01-23T20:03:31.218Z2017-01-23T20:03:31.218ZBart Jacobs

Using the Speech Recognition API in iOS 10

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-28032
Final product image
What You'll Be Creating

Introduction

Siri has been a core feature of iOS since it was introduced back in 2011. Now, iOS 10 brings new features to allow developers to interact with Siri. In particular, two new frameworks are now available: Speech and SiriKit. 

Today, we are going to take a look at the Speech framework, which allows us to easily translate audio into text. You'll learn how to build a real-life app that uses the speech recognition API to check the status of a flight.

If you want to learn more about SiriKit, I covered it in my Create SiriKit Extensions in iOS 10 tutorial. For more on the other new features for developers in iOS 10, check out Markus Mühlberger's course, right here on Envato Tuts+.

Usage

Speech recognition is the process of translating live or pre-recorded audio to transcribed text. Since Siri was introduced in iOS 5, there has been a microphone button in the system keyboard that enables users to easily dictate. This feature can be used with any UIKit text input, and it doesn't require you to write additional code beyond what you would write to support a standard text input. It's really fast and easy to use, but it comes with a few limitations:

  • The keyboard is always present when dictating.
  • The language cannot be customized by the app itself.
  • The app cannot be notified when dictation starts and finishes.
Dictation in the iOS keyboard

To allow developers to build more customizable and powerful applications with the same dictation technology as Siri, Apple created the Speech framework. It allows every device that runs iOS 10 to translate audio to text in over 50 languages and dialects.

This new API is much more powerful because it doesn't just provide a simple transcription service, but it also provides alternative interpretations of what the user may have said. You can control when to stop a dictation, you can show results as your user speaks, and the speech recognition engine will automatically adapt to the user preferences (language, vocabulary, names, etc.). 

An interesting feature is support for transcribing pre-recorded audio. If you are building an instant messaging app, for example, you could use this functionality to transcribe the text of new audio messages.

Setup

First of all, you will need to ask the user for permission to transmit their voice to Apple for analysis. 

Depending on the device and the language that is to be recognized, iOS may transparently decide to transcribe the audio on the device itself or, if local speech recognition is not available on the device, iOS will use Apple's servers to do the job. 

This is why an active internet connection is usually required for speech recognition. I'll show you how to check the availability of the service very soon.

There are three steps to use speech recognition:

  • Explain: tell your user why you want to access their voice.
  • Authorize: explicitly ask authorization to access their voice.
  • Request: load a pre-recorded audio from disk using SFSpeechURLRecognitionRequest, or stream live audio using SFSpeechAudioBufferRecognitionRequest and process the transcription.

If you want to know more about the Speech framework, watch WWDC 2016 Session 509. You can also read the official documentation.

Example

I will now show you how to build a real-life app that takes advantage of the speech recognition API. We are going to build a small flight-tracking app in which the user can simply say a flight number, and the app will show the current status of the flight. Yes, we're going to build a small assistant like Siri to check the status of any flight!

In the tutorial's GitHub repo, I've provided a skeleton project that contains a basic UI that will help us for this tutorial. Download and open the project in Xcode 8.2 or higher. Starting with an existing UI will let us focus on the speech recognition API.

Take a look at the classes in the project. UIViewController+Style.swift contains most of the code responsible for updating the UI. The example datasource of the flights displayed in the table is declared in FlightsDataSource.swift.

If you run the project, it should look like the following.

The initial example project

After the user presses the microphone button, we want to start the speech recognition to transcribe the flight number. So if the user says "LX40", we would like to show the information regarding the gate and current status of the flight. To do this, we will call a function to automatically look up the flight in a datasource and show the status of the flight. 

We are first going to explore how to transcribe from pre-recorded audio. Later on, we'll learn how to implement the more interesting live speech recognition.

Let's start by setting up the project. Open the Info.plist file and add a new row (the NSSpeechRecognitionUsageDescription key) with the explanation that will be shown to the user when asked for permission to access their voice. The newly added row is highlighted in blue in the following image.

The Info.plist file with the newly added key

Once this is done, open ViewController.swift. Don't mind the code that is already in this class; it is only taking care of updating the UI for us.

The first step with any new framework that you want to use is to import it at the top of the file.
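
In this case, that's the Speech framework:

    import Speech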

To show the permission dialog to the user, add this code in the viewDidLoad(animated:) method:
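
Roughly, the check looks like this (the .ready and .unavailable cases are assumptions about the skeleton project's status type):

    switch SFSpeechRecognizer.authorizationStatus() {
    case .notDetermined:
        askSpeechPermission()
    case .authorized:
        self.status = .ready
    case .denied, .restricted:
        self.status = .unavailable
    }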

The status variable takes care of changing the UI to warn the user that speech recognition is not available in case something goes wrong. We are going to assign a new status to the same variable every time we would like to change the UI.

If the app hasn't asked the user for permission yet, the authorization status will be notDetermined, and we call the askSpeechPermission method, defined in the next step, to request it.

You should always fail gracefully if a specific feature is not available. It's also very important to always communicate to the user when you're recording their voice. Never try to recognize their voice without first updating the UI and making your user aware of it.

Here's the implementation of the function to ask the user for permission.
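
A sketch of that method, under the same assumptions about the status property:

    func askSpeechPermission() {
        SFSpeechRecognizer.requestAuthorization { status in
            OperationQueue.main.addOperation {
                switch status {
                case .authorized:
                    self.status = .ready
                default:
                    self.status = .unavailable
                }
            }
        }
    }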

We invoke the requestAuthorization method to display the speech recognition privacy request that we added to the Info.plist. We then switch to the main thread in case the closure was invoked on a different thread—we want to update the UI only from the main thread. We assign the new status to update the microphone button to signal to the user the availability (or not) of speech recognition.

Pre-Recorded Audio Recognition

Before writing the code to recognize pre-recorded audio, we need to find the URL of the audio file. In the project navigator, check that you have a file named LX40.m4a. I recorded this file myself with the Voice Memos app on my iPhone by saying "LX40". We can easily check if we get a correct transcription of the audio.

Store the audio file URL in a property:
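
For example (the property name is arbitrary):

    // LX40.m4a ships with the app, so force unwrapping the URL is safe here.
    let audioURL = Bundle.main.url(forResource: "LX40", withExtension: "m4a")!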

It's time to finally see the power and simplicity of the Speech framework. This is the code that does all the speech recognition for us:
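
A sketch of the method, reconstructed from the steps described below (searchFlight and its parameter label belong to the example project and are assumptions here):

    func recognizeFile(url: URL) {
        // Make sure speech recognition is available right now.
        guard let recognizer = SFSpeechRecognizer(), recognizer.isAvailable else {
            self.status = .unavailable
            return
        }
        // Build a request from the pre-recorded audio file and start the task.
        let request = SFSpeechURLRecognitionRequest(url: url)
        recognizer.recognitionTask(with: request) { result, error in
            guard let result = result else { return }
            if result.isFinal {
                // The audio has been fully analyzed: look up the flight.
                self.searchFlight(number: result.bestTranscription.formattedString)
            }
        }
    }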

Here is what this method is doing:

  • Initialize a SFSpeechRecognizer instance and check that the speech recognition is available with a guard statement. If it's not available, we simply set the status to unavailable and return. (The default initializer uses the default user locale, but you can also use the SFSpeechRecognizer(locale:) initializer to provide a different locale.)
  • If speech recognition is available, create a SFSpeechURLRecognitionRequest instance by passing the pre-recorded audio URL.
  • Start the speech recognition by invoking the recognitionTask(with:) method with the previously created request.

The closure will be called multiple times with two parameters: a result and an error object. 

The recognizer is actually playing the file and trying to recognize the text incrementally. For this reason, the closure is called multiple times. Every time it recognizes a letter or word or it makes some corrections, the closure is invoked with up-to-date objects. 

The result object has the isFinal property set to true when the audio file was completely analyzed. In this case, we start a search in our flight datasource to see if we can find a flight with the recognized flight number. The searchFlight function will take care of displaying the result.

The last thing that we are missing is to invoke the recognizeFile(url:) function when the microphone button is pressed:
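
Something along these lines, assuming the button is wired up to an @IBAction and audioURL is the property defined earlier:

    @IBAction func microphonePressed(_ sender: Any) {
        recognizeFile(url: audioURL)
    }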

Run the app on your device running iOS 10, press the microphone button, and you'll see the result. The audio "LX40" is incrementally recognized, and the flight status is displayed!

 

Tip: The flight number is displayed in a UITextView. As you may have noticed, if you enable the Flight Number data detector in the UITextView, you can press on it and the current status of the flight will actually be displayed!

The complete example code up to this point can be viewed in the pre-recorded-audio branch in GitHub.

Live Audio Recognition

Let's now see how to implement live speech recognition. It's going to be a little bit more complicated compared to what we just did. You can once again download the same skeleton project and follow along.

We need a new key in the Info.plist file (NSMicrophoneUsageDescription) to explain to the user why we need access to the microphone. Add a new row to your Info.plist as shown in the image.

New row explaining why we need to access the microphone

We don't need to manually ask the user for permission because iOS will do that for us as soon as we try to access any microphone-related API.

We can reuse the same code that we used in the previous section (remember to import Speech) to ask for the authorization. The viewDidLoad(animated:) method is implemented exactly as before:

Also, the method to ask the user for permission is the same.

The implementation of startRecording is going to be a little bit different. Let's first add a few new instance variables that will come in handy while managing the audio session and speech recognition task.
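
A sketch of those properties (AVAudioEngine comes from AVFoundation, so import it as well if your project doesn't already):

    let audioEngine = AVAudioEngine()
    let speechRecognizer = SFSpeechRecognizer()   // failable initializer, so this is an optional
    let request = SFSpeechAudioBufferRecognitionRequest()
    var recognitionTask: SFSpeechRecognitionTask?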

Let's take a look at each variable separately:

  • AVAudioEngine is used to process an audio stream. We will create an audio node and attach it to this engine so that we can get updated when the microphone receives some audio signals.
  • SFSpeechRecognizer is the same class we have seen in the previous part of the tutorial, and it takes care of recognizing the speech. Given that the initializer can fail and return nil, we declare it as optional to avoid crashing at runtime.
  • SFSpeechAudioBufferRecognitionRequest is a request backed by an audio buffer, used to recognize live speech. Given that we don't have a complete audio file as we did before, we need a buffer in which to accumulate the audio as the user speaks.
  • SFSpeechRecognitionTask manages the current speech recognition task and can be used to stop or cancel it.

Once we have declared all the required variables, let's implement startRecording.
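
Here is a sketch reconstructed from the steps described below. It assumes the iOS 10 SDK, where inputNode is an optional, and it reuses the searchFlight helper and status property from earlier:

    func startRecording() {
        // Get the input node and install a tap to receive audio buffers of 1024 frames.
        guard let node = audioEngine.inputNode else { return }
        let recordingFormat = node.outputFormat(forBus: 0)
        node.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, _ in
            self.request.append(buffer)
        }

        // Prepare and start the audio engine.
        audioEngine.prepare()
        do {
            try audioEngine.start()
            self.status = .recognizing
        } catch {
            print(error)
            return
        }

        // Start the recognition task and keep a reference to it so it can be cancelled later.
        recognitionTask = speechRecognizer?.recognitionTask(with: request) { result, error in
            if let result = result {
                OperationQueue.main.addOperation {
                    self.searchFlight(number: result.bestTranscription.formattedString)
                }
            } else if let error = error {
                print(error)
            }
        }
    }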

This is the core code of our feature. I will explain it step by step:

  • First we get the inputNode of the audioEngine. A device can possibly have multiple audio inputs, and here we select the first one.
  • We tell the input node that we want to monitor the audio stream by installing a tap on it. The block that we provide will be invoked for every received audio buffer of 1024 frames. We immediately append the audio buffer to the request so that it can start the recognition process.
  • We prepare the audio engine to start recording. If the recording starts successfully, we set the status to .recognizing so that we update the button icon to let the user know that their voice is being recorded.
  • We assign the object returned by speechRecognizer.recognitionTask(with:resultHandler:) to the recognitionTask variable. If the recognition is successful, we search for the flight in our datasource and update the UI.

The function to cancel the recording is as simple as stopping the audio engine, removing the tap from the input node, and cancelling the recognition task.
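
For example (the method name here is just a placeholder for whatever the project calls it):

    func cancelRecording() {
        audioEngine.stop()
        if let node = audioEngine.inputNode {
            node.removeTap(onBus: 0)
        }
        recognitionTask?.cancel()
    }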

We now only need to start and stop the recording. Modify the microphonePressed method as follows:
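
A sketch of the updated method, again assuming a .ready case on the status property:

    @IBAction func microphonePressed(_ sender: Any) {
        switch status {
        case .recognizing:
            // A recording is in progress: stop it.
            cancelRecording()
            status = .ready
        default:
            // Otherwise, start listening.
            startRecording()
        }
    }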

Depending on the current status, we start or stop the speech recognition.

Build and run the app to see the result. Try to spell any of the listed flight numbers and you should see its status appear.

 

Once again, the example code can be viewed in the live-audio branch on GitHub.

Best Practices

Speech recognition is a very powerful API that Apple provides to iOS developers targeting iOS 10. It is completely free to use, but keep in mind that it isn't unlimited: each speech recognition task is limited to about one minute of audio, and Apple's servers may throttle your app if it demands too much computation. Also bear in mind that speech recognition can have a noticeable impact on network traffic and power usage, since audio may be sent to Apple's servers for processing. 

Make sure that your users are properly instructed on how to use speech recognition, and be as transparent as possible when you are recording their voice. 

Recap

In this tutorial, you have seen how to use fast, accurate and flexible speech recognition in iOS 10. Use it to your own advantage to give your users a new way of interacting with your app and improve its accessibility at the same time. 

If you want to learn more about integrating Siri in your app, or if you want to find out about some of the other cool developer features of iOS 10, check out Markus Mühlberger's course.

Also, check out some of our other free tutorials on iOS 10 features.


2017-01-25T19:29:15.000Z2017-01-25T19:29:15.000ZPatrick Balestra

Swift From Scratch: Variables and Constants

$
0
0

In the first article of Swift From Scratch, you learned about Xcode playgrounds and wrote your first lines of Swift. In this article, we'll start learning the fundamentals of the Swift programming language by exploring variables and typing. We'll also take a close look at constants and why you're encouraged to use them as much as possible.

In the next installments of this series, we're going to make use of Xcode playgrounds to learn the fundamentals of the Swift programming language. As we saw in the previous article, playgrounds are ideal for teaching and learning Swift.

Let's start by creating a new playground for this tutorial. I encourage you to follow along! Using a language is a great way to learn its syntax and understand its concepts.

Launch Xcode 8 and create a new playground by selecting New > Playground... from Xcode's File menu. Enter a name for the playground, set Platform to iOS, and click Next. Tell Xcode where you'd like to save the playground and hit Create.

Create a New Playground

Clear the contents of the playground so we can start with a clean slate. We've already made use of variables in the previous tutorial, but let's now take a closer look at the nitty-gritty details to better understand what Swift is doing behind the scenes.

1. Variables

Declaring Variables

In Swift, we use the var keyword to declare a variable. While this is similar to how variables are declared in other programming languages, I strongly advise you not to think about other programming languages when using the var keyword in Swift. There are a few important differences.

The var keyword is the only way to declare a variable in Swift. The most common and concise use of the var keyword is to declare a variable and assign a value to it.
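
For example, using the street variable that features throughout this tutorial:

    var street = "5th Avenue"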

Remember that we don't end this line of code with a semicolon. While a semicolon is optional in Swift, the best practice is not to use a semicolon if it isn't required.

You also may have noticed that we didn't specify a type when declaring the variable street. This brings us to one of Swift's key features, type inference.

Type Inference

The above statement declares a variable street and assigns the value 5th Avenue to it. If you're new to Swift or you're used to JavaScript or PHP, then you may be thinking that Swift is a typeless or loosely typed language, but nothing could be further from the truth. Let me reiterate that Swift is a strongly typed language. Type safety is one of the cornerstones of the language.

We're just getting started, and Swift already shows us a bit of its magic. Even though the above statement doesn't explicitly specify a type, the variable street is of type String. This is Swift's type inference in action. The value we assign to street is a string. Swift is smart enough to see that and implicitly sets the type of street to String.

The following statement gives us the same result. The difference is that we explicitly set the type of the variable. This statement literally says that street is of type String.
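
With an explicit type annotation, the declaration reads like this:

    var street: String = "5th Avenue"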

Swift requires you to explicitly or implicitly set the type of variables and constants. If you don't, Swift complains by throwing an error. Add the following line to your playground to see what I mean.
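
The offending line is a declaration with no value and no type:

    var number   // error: type annotation missing in pattern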

This statement would be perfectly valid in JavaScript or PHP. In Swift, however, it is invalid. The reason is simple. Even though we declare a variable using the var keyword, we don't specify the variable's type. Swift is unable to infer the type since we don't assign a value to the variable. If you click the error, Xcode tells you what is wrong with this statement.

Type Annotation Missing In Pattern

We can easily fix this issue by doing one of two things. We can assign a value to the variable as we did earlier, or we can explicitly specify a type for the variable number. When we explicitly set the type of number, the error disappears. The below line of code reads that number is of type String.
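
That line looks like this:

    var number: String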

Changing Type

As you can see below, assigning new values to street and number is simple and comes with no surprises.
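
For example (the new value for street is arbitrary):

    street = "Main Street"
    number = "10"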

Wouldn't it be easier to assign the number 10 to the number variable? There's no need to store the street number as a string. Let's see what happens if we do.
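
That is, assigning an integer literal to number:

    number = 10   // error: cannot assign a value of type 'Int' to type 'String'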

If we assign an integer to number, Xcode throws another error at us. The error message is clear. We cannot assign a value of type Int to a variable of type String. This isn't a problem in loosely typed languages, but it is in Swift.

Swift is a strongly typed language in which every variable has a specific type, and that type cannot be changed. Read this sentence again to let it sink in because this is an important concept in Swift.

To get rid of the error, we need to declare the number variable as an Int. Take a look at the updated example below.
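
The updated declaration might look like this:

    var number = 10   // number is now inferred to be of type Int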

Summary

It's important that you keep the following in mind. You can declare a variable with the var keyword, and you don't need to explicitly declare the variable's type. However, remember that every variable—and constant—has a type in Swift. If Swift can't infer the type, then it complains. Every variable has a type, and that type cannot be changed.

2. Constants

Constants are similar to variables in terms of typing. The only difference is that the value of a constant cannot be changed once it has a value. The value of a constant is, well... constant.

Declaring Constants

To declare a constant, you use the let keyword. Take a look at the following example in which we declare street as a constant instead of a variable.
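
For example:

    let street = "5th Avenue"
    street = "Main Street"   // error: street is a constant and cannot be reassigned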

If we only update the first line, replacing var with let, Xcode throws an error for obvious reasons. Trying to change the value of a constant is not allowed in Swift. Remove or comment out the line in which we try to assign a new value to street to get rid of the error.

Using Constants

I hope you agree that declaring constants is very easy in Swift. There's no need for exotic keywords or a complex syntax. Declaring constants is just as easy as declaring variables, and that's no coincidence.

The use of constants is encouraged in Swift. If a value isn't going to change or you don't expect it to change, then it should be a constant. This has a number of benefits. One of the benefits is performance, but a more important benefit is safety. By using constants whenever possible, you add constraints to your code, which results in safer code.

3. Data Types

Most programming languages have a wide range of types for storing strings, integers, floats, etc. The list of available types in Swift is concise. Take a look at the following list:

  • Int
  • Float
  • Double
  • String
  • Character
  • Bool

It's important to understand that the above types are not basic or primitive types. They are named types, which are implemented by Swift using structures. We explore structures in more detail later in this series, but it's good to know that the types we encountered so far are not the same as the primitive types you may have used in, for example, Objective-C.

There are many more data types in Swift, such as tuples, arrays, and dictionaries. We explore those later in this series.

4. Type Conversion

There is one more topic that we need to discuss, type conversion. Take a look at the following Objective-C snippet. This code block outputs the value 314.000000 to the console.

The Objective-C runtime implicitly converts a to a floating point value and multiplies it with b. Let's rewrite the above code snippet using Swift.
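
Assuming the Objective-C version multiplied an integer 100 by a floating-point 3.14 (which would indeed print 314.000000), the direct Swift translation looks like this:

    let a = 100       // inferred as Int
    let b = 3.14      // inferred as Double
    print(a * b)      // error: '*' cannot be applied to an Int and a Double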

Ignore the print(_:separator:terminator:) function for now. I first want to focus on the multiplication of a and b. Swift infers the type of a, an Int, and b, a Double. However, when the compiler attempts to multiply a and b, it notices that they are not of the same type. This may not seem like a problem to you, but it is for Swift. Swift doesn't know what type the result of this multiplication should be. Should it be an integer or a double?

To fix this issue, we need to make sure both operands of the multiplication are of the same type. Swift doesn't implicitly convert the operands for us, but it is easy to do so. In the updated example below, we create a Double using the value stored in a. This resolves the error.
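
The updated snippet converts a explicitly:

    let a = 100
    let b = 3.14
    print(Double(a) * b)   // prints 314.0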

Note that the type of a hasn't changed. Even though the above code snippet may look like type casting, it is not the same thing. We use the value stored in a to create a Double, and that result is used in the multiplication with b. The result of the multiplication is of type Double.

What you need to remember is that Swift is different from C and Objective-C. It doesn't implicitly convert values of variables and constants. This is another important concept to grasp.

5. Print

In the previous code snippet, you invoked your first function, print(_:separator:terminator:). This function is similar to Objective-C's NSLog; it prints something and appends a new line. To print something to the console or the results panel on the right, you invoke print(_:separator:terminator:) and pass it a parameter. That can be a variable, a constant, an expression, or a literal. Take a look at the following examples.
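
A few illustrative calls:

    print("Hello, world!")   // a literal
    print(street)            // a variable or constant declared earlier
    print(10 * 2)            // an expression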

It's also possible to use string interpolation to combine variables, constants, expressions, and literals. String interpolation is very easy in Swift. Wrap the variables, constants, expressions, or literals in \(). Easy as pie.
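
For example (the sentence itself is made up):

    let city = "New York"
    print("\(street) is located in \(city).")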

Conclusion

It is key that you understand how Swift handles variables and constants. While it may take some time to get used to the concept of constants, once you've embraced this best practice, your code will be much safer and easier to understand. In the next tutorial of this series, we'll continue our exploration of Swift by looking at collections.

If you want to learn how to use Swift 3 to code real-world apps, check out our course Create iOS Apps With Swift 3. Whether you're new to iOS app development or are looking to make the switch from Objective-C, this course will get you started with Swift for app development. 

 


2017-01-27T16:32:04.000Z2017-01-27T16:32:04.000ZBart Jacobs

Swift From Scratch: Variables and Constants

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-22828

In the first article of Swift From Scratch, you learned about Xcode playgrounds and wrote your first lines of Swift. In this article, we'll start learning the fundamentals of the Swift programming language by exploring variables and typing. We'll also take a close look at constants and why you're encouraged to use them as much as possible.

In the next installments of this series, we're going to make use of Xcode playgrounds to learn the fundamentals of the Swift programming language. As we saw in the previous article, playgrounds are ideal for teaching and learning Swift.

Let's start by creating a new playground for this tutorial. I encourage you to follow along! Using a language is a great way to learn its syntax and understand its concepts.

Launch Xcode 8 and create a new playground by selecting New > Playground... from Xcode's File menu. Enter a name for the playground, set Platform to iOS, and click Next. Tell Xcode where you'd like to save the playground and hit Create.

Create a New Playground

Clear the contents of the playground so we can start with a clean slate. We've already made use of variables in the previous tutorial, but let's now take a closer look at the nitty-gritty details to better understand what Swift is doing behind the scenes.

1. Variables

Declaring Variables

In Swift, we use the var keyword to declare a variable. While this is similar to how variables are declared in other programming languages, I strongly advise you not to think about other programming languages when using the var keyword in Swift. There are a few important differences.

The var keyword is the only way to declare a variable in Swift. The most common and concise use of the var keyword is to declare a variable and assign a value to it.

Remember that we don't end this line of code with a semicolon. While a semicolon is optional in Swift, the best practice is not to use a semicolon if it isn't required.

You also may have noticed that we didn't specify a type when declaring the variable street. This brings us to one of Swift's key features, type inference.

Type Inference

The above statement declares a variable street and assigns the value 5th Avenue to it. If you're new to Swift or you're used to JavaScript or PHP, then you may be thinking that Swift is a typeless or loosely typed language, but nothing could be further from the truth. Let me reiterate that Swift is a strongly typed language. Type safety is one of the cornerstones of the language.

We're just getting started, and Swift already shows us a bit of its magic. Even though the above statement doesn't explicitly specify a type, the variable street is of type String. This is Swift's type inference in action. The value we assign to street is a string. Swift is smart enough to see that and implicitly sets the type of street to String.

The following statement gives us the same result. The difference is that we explicitly set the type of the variable. This statement literally says that street is of type String.

Swift requires you to explicitly or implicitly set the type of variables and constants. If you don't, Swift complains by throwing an error. Add the following line to your playground to see what I mean.

This statement would be perfectly valid in JavaScript or PHP. In Swift, however, it is invalid. The reason is simple. Even though we declare a variable using the var keyword, we don't specify the variable's type. Swift is unable to infer the type since we don't assign a value to the variable. If you click the error, Xcode tells you what is wrong with this statement.

Type Annotation Missing In Pattern

We can easily fix this issue by doing one of two things. We can assign a value to the variable as we did earlier, or we can explicitly specify a type for the variable number. When we explicitly set the type of number, the error disappears. The below line of code reads that number is of type String.

Changing Type

As you can see below, assigning new values to street and number is simple and comes with no surprises.

Wouldn't it be easier to assign the number 10 to the number variable? There's no need to store the street number as a string. Let's see what happens if we do.

If we assign an integer to number, Xcode throws another error at us. The error message is clear. We cannot assign a value of type Int to a variable of type String. This isn't a problem in loosely typed languages, but it is in Swift.

Swift is a strongly typed language in which every variable has a specific type, and that type cannot be changed. Read this sentence again to let it sink in because this is an important concept in Swift.

To get rid of the error, we need to declare the number variable as an Int. Take a look at the updated example below.

Summary

It's important that you keep the following in mind. You can declare a variable with the var keyword, and you don't need to explicitly declare the variable's type. However, remember that every variable—and constant—has a type in Swift. If Swift can't infer the type, then it complains. Every variable has a type, and that type cannot be changed.

2. Constants

Constants are similar to variables in terms of typing. The only difference is that the value of a constant cannot be changed once it has a value. The value of a constant is, well... constant.

Declaring Constants

To declare a constant, you use the let keyword. Take a look at the following example in which we declare street as a constant instead of a variable.

If we only update the first line, replacing var with let, Xcode throws an error for obvious reasons. Trying to change the value of a constant is not allowed in Swift. Remove or comment out the line in which we try to assign a new value to street to get rid of the error.

Using Constants

I hope you agree that declaring constants is very easy in Swift. There's no need for exotic keywords or a complex syntax. Declaring constants is just as easy as declaring variables, and that's no coincidence.

The use of constants is encouraged in Swift. If a value isn't going to change or you don't expect it to change, then it should be a constant. This has a number of benefits. One of the benefits is performance, but a more important benefit is safety. By using constants whenever possible, you add constraints to your code, which results in safer code.

3. Data Types

Most programming languages have a wide range of types for storing strings, integers, floats, etc. The list of available types in Swift is concise. Take a look at the following list:

  • Int
  • Float
  • Double
  • String
  • Character
  • Bool

It's important to understand that the above types are not basic or primitive types. They are named types, which are implemented by Swift using structures. We explore structures in more detail later in this series, but it's good to know that the types we encountered so far are not the same as the primitive types you may have used in, for example, Objective-C.

There are many more data types in Swift, such as tuples, arrays, and dictionaries. We explore those later in this series.

4. Type Conversion

There is one more topic that we need to discuss, type conversion. Take a look at the following Objective-C snippet. This code block outputs the value 314.000000 to the console.

The Objective-C runtime implicitly converts a to a floating point value and multiplies it with b. Let's rewrite the above code snippet using Swift.

Ignore the print(_:separator:terminator:) function for now. I first want to focus on the multiplication of a and b. Swift infers the type of a, an Int, and b, a Double. However, when the compiler attempts to multiply a and b, it notices that they are not of the same type. This may not seem like a problem to you, but it is for Swift. Swift doesn't know what type the result of this multiplication should be. Should it be an integer or a double?

To fix this issue, we need to make sure both operands of the multiplication are of the same type. Swift doesn't implicitly convert the operands for us, but it is easy to do so. In the updated example below, we create a Double using the value stored in a. This resolves the error.

Note that the type of a hasn't changed. Even though the above code snippet may look like type casting, it is not the same thing. We use the value stored in a to create a Double, and that result is used in the multiplication with b. The result of the multiplication is of type Double.

What you need to remember is that Swift is different from C and Objective-C. It doesn't implicitly convert values of variables and constants. This is another important concept to grasp.

5. Print

In the previous code snippet, you invoked your first function, print(_:separator:terminator:). This function is similar to Objective-C's NSLog; it prints something and appends a new line. To print something to the console or the results panel on the right, you invoke print(_:separator:terminator:) and pass it a parameter. That can be a variable, a constant, an expression, or a literal. Take a look at the following examples.

It's also possible to use string interpolation to combine variables, constants, expressions, and literals. String interpolation is very easy in Swift. Wrap the variables, constants, expressions, or literals in \(). Easy as pie.

Conclusion

It is key that you understand how Swift handles variables and constants. While it may take some time to get used to the concept of constants, once you've embraced this best practice, your code will be much safer and easier to understand. In the next tutorial of this series, we'll continue our exploration of Swift by looking at collections.

If you want to learn how to use Swift 3 to code real-world apps, check out our course Create iOS Apps With Swift 3. Whether you're new to iOS app development or are looking to make the switch from Objective-C, this course will get you started with Swift for app development. 

 


2017-01-27T16:32:04.000Z2017-01-27T16:32:04.000ZBart Jacobs

6 Do's and Don’ts for a Great Android User Experience

$
0
0

The most popular Android apps have something in common: they all provide a great user experience. In this post, I'll share some tips that will help your app stand out.

Regardless of the kind of app you have in mind, or your target audience, designing a great user experience can help to ensure your app is a success. In this article, I’m going to share six things you should, and should not do, to make sure your app delivers the best possible experience to your end-users.

Since creating and launching an Android app is a multi-step process, I’ll be touching on every part of the Android development lifecycle—from making tough decisions about which versions of Android you should support, to creating a product that’ll appeal to a global audience, right through to analyzing your app’s post-launch performance.

1. Don’t Get Hung Up on Trying to Support Every Version of Android

While it’s natural to want to get your app in front of as many users as possible, don’t fall into the trap of assuming that supporting more versions of Android is always the best approach.

The key to attracting a large audience is to provide the best possible user experience, and setting your sights on supporting as many versions of Android as possible can actually damage the overall user experience.

The major issue is that as you continue to travel back through Android’s release history, it’s going to become harder to make your app play nicely with the earlier releases.

Sometimes, there may be a clear point where your app becomes incompatible with earlier releases of Android. For example, if your app absolutely requires access to Bluetooth Low Energy (BLE), then your app isn’t going to be able to run on anything earlier than Android 4.3—the version where BLE support was added to the Android platform.

However, sometimes this line may not be quite so clear-cut, and you may find yourself debating whether to modify or even remove non-critical features, in order to create something that can run on a particular version of Android. Small compromises can gradually chip away at the quality of the user experience, so always take stock of the impact these changes will have on your users.

In addition, customizing, optimizing and testing your app for each different version of Android requires time and effort, so you’ll also need to ask yourself whether this investment is worth the potential rewards. Basically, how many more users could you potentially gain by supporting each version of Android? You can get an indication of how many Android devices are running each release of the Android platform, by checking out the stats at Google's Dashboard.

Ultimately, there’s no universal right or wrong answer, so you’ll need to weigh up the pros and cons and decide what makes the most sense for your particular project.

Once you’ve decided which versions of Android you’re going to support, add this information to your module-level build.gradle file using minSdkVersion (the lowest API your app is compatible with), targetSdkVersion (the highest API level you’ve tested your app against), and compileSdkVersion (the version of the Android SDK that Gradle should use to compile your app).

To make sure your app benefits from the latest Android features while remaining compatible with earlier releases, it’s recommended that you set your minSdkVersion as low as possible, while setting targetSdkVersion and compileSdkVersion to the latest version of the Android SDK.

2. Do Design for Multiple Screens

When you’re working on an Android app, it’s not uncommon to spend most of your time testing that app on your own Android smartphone or tablet. Particularly during the early stages of app development, creating multiple Android Virtual Devices (AVDs) will probably be the last thing on your mind.

However, don’t lose sight of the bigger picture! It’s easy to get hung up on designing for the one screen that’s physically in front of you, but it’s essential that your app looks good and functions correctly across a wide range of Android devices.

The Android system will automatically scale your layouts, drawables and other resources so they render at the appropriate size for the current screen, but for the best user experience you should aim to create the illusion that your app was designed for the user’s specific device. Relying on auto-scaling alone isn’t going to cut it!

To make sure your app delivers the best user experience across a wide range of devices, you’ll need to provide alternate resources that are optimized for different devices, such as drawables that target Android’s generalized density buckets, and alternate layouts that are optimized for landscape mode.

Once you’ve created your alternate resources, you’ll need to create alternate directories that are tagged with the appropriate configuration qualifiers, and then place the resources inside these directories—for example, a res/layout-land directory would contain layouts that are designed for landscape orientation. The Android system will then automatically load the resource that best matches the current screen configuration at runtime.

While most configuration qualifiers are relatively straightforward, providing resources that target different screen sizes is a bit more complex, requiring you to specify the exact dp (density-independent pixel) value where the system should start using this resource. So, you basically need to tell the system, "I want to use this layout when my app is being displayed on devices with 800dp or more available screen width."

You arrive at these values by testing your app across a wide range of different AVDs and making a note of all the screen sizes that your default resources are struggling with—for example, maybe your default layout starts to look cluttered once the available width falls below a certain dp threshold.

There are three screen size configuration qualifiers that you can use in your projects:

  • smallestWidth: sw<value>dp. Allows you to specify the minimum horizontal space that must be available before the system can use the resources in this directory. For example, if you had a set of layouts that require 800dp or more, you’d create a res/layout-sw800dp directory. Note that a device’s smallestWidth is a fixed value that doesn’t change when the user switches their device between portrait and landscape orientation.

  • Available screen width: w<value>dp. The minimum horizontal space that must be available before the system can use these resources. A device’s w<value>dp value does change when the user switches between portrait and landscape mode.

  • Available screen height: h<value>dp. The minimum height that must be available before the system can use these resources. The device’s h<value>dp value will change depending on whether the user is holding their device in landscape or portrait.

Designing for multiple screens is mostly about creating alternate versions of your project’s resources and adding them to the appropriate directories—then rinse and repeat. However, there are some additional tricks you can use when creating these alternate resources that can really help create the illusion that your app was designed for the user’s specific device:

  • Use density-specific drawables in combination with 9-patch images. If the system needs to resize an image to fit the current screen then by default it’ll resize the whole image, which can lead to blurry, pixelated or otherwise odd-looking images. For the best possible results, you should specify the exact pixels that the system should replicate if it needs to resize your image, by delivering your project’s drawables as 9-patch images. Provide multiple 9-patch versions of each drawable, where each 9-patch targets a different screen density, and the system will then load the 9-patch image that’s the best fit for the current screen density and stretch the 9-patch image’s ‘stretchable’ sections, if required. You can create 9-patch images using any PNG editor, or you can use the Draw 9-patch editor that’s included in the Android SDK (you’ll find it in sdk/tools/Draw9patch.bat).

  • Create multiple dimens.xml files. It’s recommended that you define your layout’s values in a separate dimens.xml file rather than hard-coding them into your project. However, you can take this one step further and create multiple dimens.xml files that target different screen sizes and densities. For example, you might create a values-ldpi/dimens.xml file where you define the values that your app should use when it’s installed on a device that falls into the ‘low’ density category. The system will then load the appropriate dimensions for the current device, and apply them to your layout.

  • Consider using fragments. Fragments provide you with a way of dividing a single Activity into separate components that you can then display in different ways, depending on the current screen configuration. For example, you may choose to display multiple fragments side-by-side in a multi-pane layout when your app is installed on a device with a larger screen, and as separate Activities when space is more limited. The simplest way to add a fragment to your layout is by inserting a <fragment> element into your layout resource file. Alternatively, you can add fragments to a layout via your application code—this method may be more complicated, but it gives you the added flexibility of being able to add, remove or replace fragments at runtime.

3. Do Consider Supporting Different Languages

Android is a global operating system, so if your app is going to provide the best user experience to this global audience then you should consider localizing your app for different languages, and potentially different regions.

Typically, the biggest part of localizing an app is translating your project’s strings.xml file into the different language(s) you want to support. Unless you happen to be fluent in your target languages, you’re going to need to enlist the help of a translator. If you don’t have anyone in mind, then the Developer Console’s Google Play App Translation services can point you in the direction of potential translators.

You’ll find a Purchase Translations option in the Google Play Developer Console

Once you’ve chosen a translator, you should take a critical look at your strings.xml file before sending it off for translation. Check for things like spelling mistakes and typos, and make sure your strings.xml is formatted in a way that makes it easy to read, bearing in mind that your translator may not be an Android developer themselves.

You should also provide as much context as possible, so make sure you add a comment to each string explaining what this string is for, where and when it’ll appear in your app, and any restrictions the translator needs to be aware of. For example, if a string needs to remain under 10 characters long in order to fit its allocated space in your layout, then this is something the translator needs to be aware of!

Once the translator has returned your translated strings.xml file(s), you’ll need to create a directory for each alternative file, which means you’ll need to work out what configuration qualifiers to use.

A locale configuration qualifier consists of an ISO code, which is essentially a language code, and an optional country or regional code, which is preceded by a lowercase r. For example, if you wanted to provide French (fr) text for people located in Canada (CA), then you’d create a res/values-fr-rCA directory.

If you do provide localized text, then bear in mind that some strings may expand or shrink significantly during translation, so you’ll need to test that your layouts can accommodate all versions of your project’s strings.

The easiest way to test your localized resources is to install your app on an AVD and then emulate the different locations and language settings. At this point, the system will load the localized versions of your resources and display them in your app.

You can change the locale settings in a running AVD by issuing the following Android Debug Bridge (adb) commands:

Followed by:

Note, you’ll need to replace the fr-CAN with whatever configuration qualifier you want to test against.

If you’ve designed your app with best practices in mind, then your layouts should be flexible enough to display the majority of your localized strings. However, if the length of your strings varies dramatically then you may need to provide alternate layouts that are optimized for different locales.

While your project’s strings.xml file is generally the main resource you’ll need to localize, you should also consider whether there are other resources you might need to translate such as drawables that contain text, video or audio that contains dialogue, or any resources that might be inappropriate for the locale you’re targeting.

Once you’re confident that you’ve provided all the necessary localized resources and you’ve performed your own round of testing, you should consider arranging a beta test with native speakers in each of your target locales. Native speakers can often pick up on mistakes that even a translator may overlook, and may be able to give you some advice about how you can make your app more appealing to this particular section of your audience. You can arrange this kind of targeted beta testing via the Google Play Developer Console.

When you’re finally ready to launch your app, make sure you take the time to create localized versions of your app’s Google Play page, as this will instantly make your app more attractive to international users who are browsing the Google Play store. You should also try and provide screenshots that clearly show the localized text inside your app, so users aren’t left wondering whether you’ve just translated your app’s Google Play page, and not the actual text inside your app.

And the hard work doesn’t end once you’ve launched your app! Once you've attracted your international audience, you’ll need to hang onto them by providing ongoing support across multiple languages—even if that means resorting to machine translators like Google Translate. At the very least, you should keep an eye on your Google Play reviews, to see whether users in certain locales are reporting similar issues, which may indicate an issue with one or more of your app’s localized resources.

4. Don’t Forget About Accessibility!

As an app developer, you’ll want to make sure that everyone can enjoy using your app, so it’s important to consider how accessible your app is to people who may be experiencing it without sound, colour or other visuals, or anyone who interacts with their Android device via an accessibility tool such as a screen reader.

Android is designed with accessibility in mind, so it has a number of built-in accessibility features that you can utilize without having to make any fundamental changes to your application’s code.

Let's look at a number of minor tweaks you can make to your project, which will have a huge impact on your app’s accessibility:

Consider Supplying Additional Content Descriptions

Accessibility services such as TalkBack read onscreen text out loud, making them an important tool for helping users with vision-related problems interact with their Android devices.  

When designing your Android app, you should consider how easy it'll be for users to navigate your app using its onscreen text alone. If you do need to provide some additional context, then you can add a content description to any of your app’s UI components, which will then be read aloud by services such as TalkBack. To add a content description, open your project’s layout resource file and add an android:contentDescription attribute to the UI component in question, followed by the description you want to use.
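For example, an icon-only ImageButton might be described like this (the drawable and string resource names below are just placeholders):

```xml
<!-- TalkBack reads the contentDescription aloud when this button gains focus. -->
<ImageButton
    android:id="@+id/button_search"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:src="@drawable/ic_search"
    android:contentDescription="@string/search_description" />
```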

Support Focus Navigation

Users with limited vision or limited manual dexterity may find it easier to interact with their device using a directional controller, such as a trackpad, D-pad or keyboard, or software that emulates a directional controller. To make sure your app supports this kind of focus navigation, you’ll need to add the android:focusable="true" attribute to each of your app’s navigational components.

When users navigate your app using directional controls, focus gets passed from one UI element to another in an order that's determined automatically using an algorithm. However, you can override these defaults and specify which UI component should gain focus when the user moves in a particular direction, by adding the following XML attributes to any of your UI components: android:nextFocusUp, android:nextFocusDown, android:nextFocusLeft, and android:nextFocusRight. For example:
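Here’s a sketch of what this might look like for two buttons that pass focus to each other (the IDs and string resources are placeholders, and the buttons are assumed to sit inside a larger layout):

```xml
<Button
    android:id="@+id/button_save"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="@string/save"
    android:focusable="true"
    android:nextFocusDown="@+id/button_cancel" />

<Button
    android:id="@+id/button_cancel"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="@string/cancel"
    android:focusable="true"
    android:nextFocusUp="@id/button_save" />
```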

Make Your Text Resizable

Users with vision impairments may choose to increase the size of the font that appears across their device. To make sure any font changes are reflected in your app, define your text in scale-independent pixels (sp), and don’t forget to test what impact Android’s various font sizes will have on your app’s UI, just in case you need to make some adjustments to your layout.
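For instance, a text view whose size is defined in sp will scale along with the user’s chosen font size (the string resource here is a placeholder):

```xml
<TextView
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="@string/welcome_message"
    android:textSize="16sp" />
```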

Use Recommended Touch Target Sizes

To help people with manual dexterity challenges navigate your app, it’s recommended that you make every touch target at least 48 x 48 dp, and leave at least 8 dp of space between neighbouring targets.
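One way to guarantee this for small icon buttons is to give them a minimum width and height, as in the sketch below (the icon and string resources are placeholder names):

```xml
<!-- minWidth/minHeight keep the touch target at 48 x 48 dp, even if the icon is smaller. -->
<ImageButton
    android:id="@+id/button_share"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:minWidth="48dp"
    android:minHeight="48dp"
    android:src="@drawable/ic_share"
    android:contentDescription="@string/share_description" />
```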

Consider Disabling Timed-Out Controls

Some UI components may disappear automatically after a certain period of time has elapsed—for example, video playback controls often disappear once a video has been playing for a few moments. 

The problem is that accessibility services such as TalkBack don’t read a control until the user focuses on it, so if a timed-out control vanishes before the user has a chance to focus on it, they won’t be aware that it even exists. For this reason, you should consider upgrading timed-out controls to permanent controls whenever accessibility services are enabled.
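As a minimal sketch, you might check whether a touch-exploration accessibility service (such as TalkBack) is active before starting the timer that hides your controls; the helper class below is hypothetical:

```java
import android.content.Context;
import android.view.accessibility.AccessibilityManager;

public class ControlsHelper {

    // Returns true when an accessibility service with touch exploration (such as
    // TalkBack) is active, in which case the playback controls should stay visible
    // instead of being hidden on a timer.
    public static boolean shouldKeepControlsVisible(Context context) {
        AccessibilityManager am = (AccessibilityManager)
                context.getSystemService(Context.ACCESSIBILITY_SERVICE);
        return am != null && am.isEnabled() && am.isTouchExplorationEnabled();
    }
}
```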

5. Do Put Your App’s Performance to the Test

Just because your app doesn’t crash or throw any errors during testing doesn’t automatically mean it’s performing well, as some performance problems can be insidious and difficult to spot during regular testing. No one enjoys using an app that takes forever to load, lags whenever you try to interact with it, or gobbles up the available memory, so you should always put your app through a number of performance-based tests before releasing it into the wild.

The Android SDK comes with a wide range of tools that you can use to specifically test your app’s performance. In this section, we’re going to look at a few of the ones you’ll definitely want to use; however, there are many more that are worth investigating (you’ll find more information in the official Android docs).

Note that all of these tools can only communicate with a running app, so you’ll need to make sure the app you want to test is installed on either an AVD or a physical device that’s connected to your development machine.

Before we get started, it’s worth noting that if you do identify a problem with your app’s performance, you should time your code before trying to fix the problem, and then again afterwards, so you can see exactly what impact your changes have had on your app’s performance.

You can time your code using TraceView, which you access by selecting the Android Device Monitor’s DDMS tab, then selecting the device and the process you want to profile, and giving the Start Method Profiling icon a click (where the cursor is positioned in the following screenshot).

Screenshot: In the Android Device Monitor, select the DDMS tab, then click the Start Method Profiling icon.

At this point you can select either Trace-based profiling (which traces the entry and exit of every method) or Sample-based profiling (which collects call stacks at a frequency you specify). Once you’ve made your selection, spend some time interacting with your app. When you’re ready to see the results, load the trace file into the viewer by giving the Stop Method Profiling icon a click. The trace file displays each thread’s execution as a separate row, so you can see exactly how long each part of your project takes to run.
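If you’d rather scope the trace to a specific section of code instead of toggling it from the Device Monitor, you can also use the android.os.Debug API; the sketch below wraps a hypothetical doExpensiveWork() method and writes a trace file you can open in the viewer:

```java
import android.os.Debug;

public class StartupProfiler {

    public static void profileExpensiveWork() {
        // Begin recording method calls; "startup" is a placeholder trace name.
        Debug.startMethodTracing("startup");

        doExpensiveWork();

        // Stop recording; the resulting .trace file can be inspected in TraceView.
        Debug.stopMethodTracing();
    }

    private static void doExpensiveWork() {
        // ... the code you want to measure ...
    }
}
```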

Identifying Overdraw

When the system draws your app’s UI, it starts with the highest-level container and then works its way through your view hierarchy, potentially drawing views on top of one another in a process known as overdraw. While a certain amount of overdraw is inevitable, you can reduce the time it takes your app to render by identifying and removing any instances of excessive or unnecessary overdraw.

If you have a device running Android 4.2 or higher, you can check the amount of overdraw present in any app installed on that device, by selecting Settings > Developer Options > Debug GPU Overdraw > Select overdraw areas. The system will then add a coloured overlay to each area of the screen, indicating the number of times each pixel has been drawn:

  • No Color. This pixel was painted once.

  • Blue. An overdraw of 1x. These pixels were painted twice.

  • Green. An overdraw of 2x.

  • Light red. An overdraw of 3x.

  • Dark red. An overdraw of 4x or more.

Most apps include some level of overdraw, but if you spot large areas of overdraw in your app, then you should look at whether there’s any way of reducing the number of times each pixel is being redrawn. One of the most effective methods is to remove unnecessary views from your layouts.
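For instance, a redundant wrapper layout can often be eliminated by giving a reused layout a <merge> root, so that no extra ViewGroup is added when the file is pulled in via <include>; the resource names in this sketch are placeholders:

```xml
<!-- reusable_content.xml: the <merge> root disappears when this file is
     included into a parent layout, avoiding an unnecessary wrapper ViewGroup. -->
<merge xmlns:android="http://schemas.android.com/apk/res/android">

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/title" />

    <Button
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/action_next" />

</merge>
```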

The Android Device Monitor’s Hierarchy Viewer provides a high-level overview of your app’s entire view hierarchy, which can help you identify views that aren’t contributing anything to the final rendered image the user sees onscreen. 

To launch Hierarchy Viewer, click the Android Device Monitor’s Hierarchy View button, and then select the device and the Activity you want to inspect, followed by the blue Load the view hierarchy into the tree view icon.  

Screenshot: In the Android Device Monitor, select the Hierarchy View button, then click the Load the view hierarchy into the tree view icon.

You may also want to export the Hierarchy Viewer output as a Photoshop document. This is a particularly effective technique for identifying views that aren’t contributing anything to the final UI, as each view is displayed as a separate Photoshop layer, meaning you can hide and reveal each layer and see exactly what impact this will have on the final image the user sees onscreen.

To create a PSD document, simply give the Capture the window layers as a Photoshop document icon a click.

Spotting Memory Leaks

Garbage collection (GC) is a normal system behaviour that’s important for ensuring your app and the device as a whole continue to run smoothly.

However, if your app isn’t managing memory properly—maybe it’s leaking memory or allocating a large number of objects in a short period of time—then this can trigger more frequent GC events that also run for longer. You can take a look at exactly what GC events are occurring in your app in the main Android Studio window; open the Android Monitor tab towards the bottom of the window, followed by the Monitors tab. The Memory Monitor tool will then start recording your app’s memory usage automatically.

Screenshot: Select the Android Monitor tab towards the bottom of the main Android Studio window, followed by the Monitors tab.

If you keep interacting with your app, then eventually you’ll see a sudden drop in the amount of allocated memory, indicating that a GC event has occurred. Repeat this process, making sure you explore different areas of your app, and see what impact this has on the GC events. If you do spot any strange GC behaviour, then you’ll want to investigate further as this may indicate a problem with the way your app is using memory.

There are a couple of tools you can use to gather more information about your app’s memory usage. Firstly, you can use the Android Device Monitor’s Heap tab to see how much heap memory each process is using, which will flag up any processes that are gobbling up available memory.

To use the Heap tool, select the Android Device Monitor’s DDMS tab, followed by the process you want to examine, and then click the Update heap button. The Heap tab won’t display any data until after a GC event has occurred, but if you’re feeling impatient you can always trigger a GC event by giving the Cause GC button a click.

Screenshot: Open the Heap tool by selecting DDMS > Heap.

Another tool that can help you gather information about your app’s memory usage is Allocation Tracker, which lets you see exactly what objects your app is allocating to memory. To use Allocation Tracker, select the Android Device Monitor’s DDMS tab, followed by Allocation Tracker and the process you want to examine.

Click the Start Tracking button and spend some time interacting with your app, especially any parts that you suspect may be causing your app’s memory management problems. To see all the data Allocation Tracker has gathered during the sampling period, click the Get Allocations button (and click Stop Tracking once you’ve finished sampling).

6. Do Use Analytics Tools

Understanding your audience is a crucial part of creating a successful app.

Having access to data about who your audience is and how they’re using your app means you can make more informed decisions about how to build on your app’s success—and improve on the areas where your app isn’t doing quite so well.

Gathering this kind of user data is particularly useful for enabling you to identify trends across different sections of your audience. For example, you might decide that a certain segment of your audience is particularly valuable—maybe they’re racking up the largest percentage of in-app purchases, or they’re investing an above-average amount of time in your app. Armed with this information, you can take extra steps to support these users, making sure that your most valuable users remain engaged with your app.  

At the other end of the scale, you might discover an area where your app is struggling. For example, maybe users who are running a particular version of Android have considerably lower engagement rates, or are more likely to uninstall your app. In this scenario, you might want to test your app on this particular version of Android to see whether there’s a bug or any other error that might be ruining the user experience for this section of your user base.

Basically, the more data you can gather about your audience and their behaviour, the greater your chances of keeping your existing users happy, attracting new users, and generally delivering an all-round great experience for everyone who comes into contact with your app.

In this section I’m going to look at two services that can put a wealth of information at your fingertips: Firebase Analytics and the Google Play Developer Console.

Firebase

You can use Firebase Analytics to gather data about your users, such as their age, gender and location, plus unlimited reporting on up to 500 distinct in-app events. You can even define your own custom events, if required.

To add Firebase Analytics to your project, you’ll need Google Play services 10.0.1 or higher and the Google Repository version 26 or higher, so open the SDK Manager and make sure these components are up to date. You’ll also need to be running Android Studio 1.5 or higher, and be signed up for a free Firebase account.

If you’re running Android Studio 2.2 or later, then you can connect your app to Firebase Analytics using the Firebase Assistant. Open Android Studio, launch the project in question, and:

  • Select Tools > Firebase from the Android Studio toolbar.

Screenshot: Launch the Firebase Assistant by selecting Tools > Firebase from the Android Studio toolbar.

  • Click to expand the Analytics section, then select the Log an Analytics event link.

  • Click the Connect to Firebase button.

  • In the dialogue that appears, opt to create a new Firebase project.

  • Click the dialogue’s Connect to Firebase button.

  • After a few moments, you should see a Connected message.

  • Give the Add analytics to your app button a click.

  • In the subsequent dialogue, click Accept changes.

And that’s it! You can now view all your Firebase Analytics data by logging into the Firebase Console, selecting the project you want to examine, and then selecting Analytics. This data will update periodically throughout the day.
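Once the Assistant has added the firebase-analytics dependency, logging an event from an Activity might look something like the sketch below; the event and parameter values here are just placeholders:

```java
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import com.google.firebase.analytics.FirebaseAnalytics;

public class MainActivity extends AppCompatActivity {

    private FirebaseAnalytics firebaseAnalytics;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Obtain the analytics instance the Firebase Assistant wired up.
        firebaseAnalytics = FirebaseAnalytics.getInstance(this);

        // Log one of the predefined events; the ID and type values are placeholders.
        Bundle params = new Bundle();
        params.putString(FirebaseAnalytics.Param.ITEM_ID, "welcome_screen");
        params.putString(FirebaseAnalytics.Param.CONTENT_TYPE, "screen");
        firebaseAnalytics.logEvent(FirebaseAnalytics.Event.SELECT_CONTENT, params);
    }
}
```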

The Developer Console

You can also gain a valuable insight into your app’s performance and user behaviour via the Developer Console.

To view this data, log into your Developer Console account, select the app you want to examine, and then select Dashboard from the left-hand menu.

The Developer Console contains lots of useful information, so it’s well worth taking the time to explore its various sections in detail. However, there are a few areas that may be of particular interest:

  • User Acquisition Performance. This section contains a breakdown of how users are finding your app’s Google Play listing, for example the percentage of users who landed on your page via a UTM-tagged link or an AdWords ad, versus those who simply found your app by browsing the Google Play store.

  • Finance. If you’ve implemented a monetization strategy, such as in-app products or a subscription option, then you can use the Developer Console to review your app’s financial performance. The Finance section contains information such as the average amount each paying user is investing in your app, how much revenue is being generated by each product you offer through your app, and how much each section of your user base is spending, based on factors such as their geographical location, age, and the version of Android they’re using.

  • Crashes & ANRs. This section contains all the data users have submitted about application crashes and application not responding (ANR) errors, giving you a chance to identify and fix any issues that may be occurring in your app before users start leaving you negative reviews on Google Play. Note that crashes that aren’t reported by your users won’t appear in the Developer Console.

You may also want to consider downloading the Google Play Developer Console app, which lets you review all this information on the go.

Conclusion

In this article, we looked at six things you should and shouldn’t do when you’re developing an Android app. What are your golden rules for designing a great user experience? Leave a comment below and let us know.

In the meantime, check out some of our other courses and tutorials on Android programming!

2017-02-02T12:57:44.000Z Jessica Thornsby