Channel: Envato Tuts+ Code - Mobile Development

How to Use the Google Cloud Vision API in Android Apps


Computer vision is considered an AI-complete problem. In other words, solving it would be equivalent to creating a program that's as smart as humans. Needless to say, such a program is yet to be created. However, if you've ever used apps like Google Goggles or Google Photos—or watched the segment on Google Lens in the keynote of Google I/O 2017—you probably realize that computer vision has become very powerful.

Through a REST-based API called Cloud Vision API, Google shares its revolutionary vision-related technologies with all developers. By using the API, you can effortlessly add impressive features such as face detection, emotion detection, and optical character recognition to your Android apps. In this tutorial, I'll show you how.

Prerequisites

To be able to follow this tutorial, you must have:

If some of the above requirements sound unfamiliar to you, I suggest you read the following introductory tutorial about the Google Cloud Machine Learning platform:

1. Enabling the Cloud Vision API

You can use the Cloud Vision API in your Android app only after you've enabled it in the Google Cloud console and acquired a valid API key. So start by logging in to the console and navigating to API Manager > Library > Vision API. In the page that opens, simply press the Enable button.

Enable Cloud Vision API

If you've already generated an API key for your Cloud console project, you can skip to the next step because you will be able to reuse it with the Cloud Vision API. Otherwise, open the Credentials tab and select Create Credentials > API key.

Create API key

In the dialog that pops up, you will see your API key.

2. Adding Dependencies

Like most other APIs offered by Google, the Cloud Vision API can be accessed using the Google API Client library. To use the library in your Android Studio project, add the following compile dependencies in the app module's build.gradle file:
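A dependencies block along these lines should work; the artifact versions shown are assumptions from around the time of writing, so prefer the latest available releases:

```groovy
dependencies {
    // Google API Client for Android, plus the generated Cloud Vision
    // service library (the exact versions here are assumptions)
    compile 'com.google.api-client:google-api-client-android:1.22.0'
    compile 'com.google.apis:google-api-services-vision:v1-rev357-1.22.0'
}
```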

Furthermore, to simplify file I/O operations, I suggest you also add a compile dependency for the Apache Commons IO library.

Because the Google API Client can work only if your app has the INTERNET permission, make sure the following line is present in your project's manifest file:
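That line is the standard INTERNET permission declaration, which belongs directly inside the manifest's root `<manifest>` element:

```xml
<uses-permission android:name="android.permission.INTERNET"/>
```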

3. Configuring the API Client

You must configure the Google API client before you use it to interact with the Cloud Vision API. Doing so primarily involves specifying the API key, the HTTP transport, and the JSON factory it should use. As you might expect, the HTTP transport will be responsible for communicating with Google's servers, and the JSON factory will, among other things, be responsible for converting the JSON-based results the API generates into Java objects. 

For modern Android apps, Google recommends that you use the NetHttpTransport class as the HTTP transport and the AndroidJsonFactory class as the JSON factory.

The Vision class represents the Google API Client for Cloud Vision. Although it is possible to create an instance of the class using its constructor, doing so using the Vision.Builder class instead is easier and more flexible.

While using the Vision.Builder class, you must remember to call the setVisionRequestInitializer() method to specify your API key. The following code shows you how:
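A minimal sketch of that configuration; the key string is a placeholder for the API key you generated earlier:

```java
// Configure the client with the recommended transport and JSON factory
Vision.Builder visionBuilder = new Vision.Builder(
        new NetHttpTransport(),
        new AndroidJsonFactory(),
        null);

// Attach your API key; replace the placeholder with your own key
visionBuilder.setVisionRequestInitializer(
        new VisionRequestInitializer("YOUR_API_KEY"));

// Generate the Vision instance you'll reuse throughout the app
Vision vision = visionBuilder.build();
```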

Once the Vision.Builder instance is ready, you can call its build() method to generate a new Vision instance you can use throughout your app.

At this point, you have everything you need to start using the Cloud Vision API.

4. Detecting and Analyzing Faces

Detecting faces in photographs is a very common requirement in computer vision-related applications. With the Cloud Vision API, you can create a highly accurate face detector that can also identify emotions, lighting conditions, and face landmarks.

For the sake of demonstration, we'll be running face detection on the following photo, which features the crew of Apollo 9:

Sample photo for face detection

I suggest you download a high-resolution version of the photo from Wikimedia Commons and place it in your project's res/raw folder.

Step 1: Encode the Photo

The Cloud Vision API expects its input image to be encoded as a Base64 string that's placed inside an Image object. Before you generate such an object, however, you must convert the photo you downloaded, which is currently a raw image resource, into a byte array. You can quickly do so by opening its input stream using the openRawResource() method of the Resources class and passing it to the toByteArray() method of the IOUtils class.

Because file I/O operations should not be run on the UI thread, make sure you spawn a new thread before opening the input stream. The following code shows you how:
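A sketch of that background work; the resource name R.raw.photo is hypothetical and should match wherever you placed the downloaded image:

```java
// Keep the blocking file I/O (and the upcoming network call) off the UI thread
new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            InputStream inputStream =
                    getResources().openRawResource(R.raw.photo);
            byte[] photoData = IOUtils.toByteArray(inputStream);
            inputStream.close();

            // ...the Vision request is composed and sent in the next steps...
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}).start();
```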

You can now create an Image object by calling its default constructor. To add the byte array to it as a Base64 string, all you need to do is pass the array to its encodeContent() method.
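In code, wrapping the bytes looks like this:

```java
// Image is the Cloud Vision model class; encodeContent() stores the
// bytes as the Base64 string the API expects
Image inputImage = new Image();
inputImage.encodeContent(photoData);
```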

Step 2: Make a Request

Because the Cloud Vision API offers several different features, you must explicitly specify the feature you are interested in while making a request to it. To do so, you must create a Feature object and call its setType() method. The following code shows you how to create a Feature object for face detection only:
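For face detection, that means setting the type to "FACE_DETECTION":

```java
Feature desiredFeature = new Feature();
desiredFeature.setType("FACE_DETECTION");
```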

Using the Image and the Feature objects, you can now compose an AnnotateImageRequest instance.

Note that an AnnotateImageRequest object must always belong to a BatchAnnotateImagesRequest object because the Cloud Vision API is designed to process multiple images at once. To initialize a BatchAnnotateImagesRequest instance containing a single AnnotateImageRequest object, you can use the Arrays.asList() utility method.

To actually make the face detection request, you must call the execute() method of an Annotate object that's initialized using the BatchAnnotateImagesRequest object you just created. To generate such an object, you must call the annotate() method offered by the Google API Client for Cloud Vision. Here's how:
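Putting those pieces together, still inside the worker thread, a sketch might look like this:

```java
// Compose a request pairing the image with the desired feature
AnnotateImageRequest annotateImageRequest = new AnnotateImageRequest();
annotateImageRequest.setImage(inputImage);
annotateImageRequest.setFeatures(Arrays.asList(desiredFeature));

// The API processes images in batches, even if there's only one
BatchAnnotateImagesRequest batchRequest = new BatchAnnotateImagesRequest();
batchRequest.setRequests(Arrays.asList(annotateImageRequest));

// execute() performs the blocking network call
BatchAnnotateImagesResponse batchResponse =
        vision.images().annotate(batchRequest).execute();
```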

Step 3: Use the Response

Once the request has been processed, you get a BatchAnnotateImagesResponse object containing the response of the API. For a face detection request, the response contains a FaceAnnotation object for each face the API has detected. You can get a list of all FaceAnnotation objects using the getFaceAnnotations() method.

A FaceAnnotation object contains a lot of useful information about a face, such as its location, its angle, and the emotion it is expressing. As of version 1, the API can only detect the following emotions: joy, sorrow, anger, and surprise.

To keep this tutorial short, let us now simply display the following information in a Toast:

  • The count of the faces
  • The likelihood that they are expressing joy

You can, of course, get the count of the faces by calling the size() method of the List containing the FaceAnnotation objects. To get the likelihood of a face expressing joy, you can call the intuitively named getJoyLikelihood() method of the associated FaceAnnotation object. 

Note that because a simple Toast can only display a single string, you'll have to concatenate all the above details. Additionally, a Toast can only be displayed from the UI thread, so make sure you call it after calling the runOnUiThread() method. The following code shows you how:
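Here's one way to assemble and show that message; the exact string formatting is, of course, up to you:

```java
// Pull the annotations for the first (and only) image in the batch
List<FaceAnnotation> faces = batchResponse.getResponses()
                                          .get(0)
                                          .getFaceAnnotations();

// Concatenate everything into the single string a Toast can display
final StringBuilder message = new StringBuilder();
message.append("Number of faces: ").append(faces.size()).append("\n");
for (FaceAnnotation face : faces) {
    message.append("Joy likelihood: ")
           .append(face.getJoyLikelihood())
           .append("\n");
}

// A Toast may only be shown from the UI thread
runOnUiThread(new Runnable() {
    @Override
    public void run() {
        Toast.makeText(getApplicationContext(),
                message.toString(), Toast.LENGTH_LONG).show();
    }
});
```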

You can now go ahead and run the app to see the following result:

Face detection results

5. Reading Text

The process of extracting strings from photos of text is called optical character recognition, or OCR for short. The Cloud Vision API allows you to easily create an optical character reader that can handle photos of both printed and handwritten text. What's more, the reader you create will have no trouble reading angled text or text that's overlaid on a colorful picture.

The API offers two different features for OCR:

  • TEXT_DETECTION, for reading small amounts of text, such as that present on signboards or book covers
  • DOCUMENT_TEXT_DETECTION, for reading large amounts of text, such as that present on the pages of a novel

The steps you need to follow in order to make an OCR request are identical to the steps you followed to make a face detection request, except for how you initialize the Feature object. For OCR, you must set its type to either TEXT_DETECTION or DOCUMENT_TEXT_DETECTION. For now, let's go with the former.

You will, of course, also have to place a photo containing text inside your project's res/raw folder. If you don't have such a photo, you can use this one, which shows a street sign:

Sample photo for text detection

You can download a high-resolution version of the above photo from Wikimedia Commons.

In order to start processing the results of an OCR operation, after you obtain the BatchAnnotateImagesResponse object, you must call the getFullTextAnnotation() method to get a TextAnnotation object containing all the extracted text.

You can then call the getText() method of the TextAnnotation object to actually get a reference to a string containing the extracted text.

The following code shows you how to display the extracted text using a Toast:
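A sketch of that final step, again on the worker thread except for the Toast itself:

```java
// getFullTextAnnotation() gathers all the text the API extracted
TextAnnotation textAnnotation = batchResponse.getResponses()
                                             .get(0)
                                             .getFullTextAnnotation();
final String extractedText = textAnnotation.getText();

runOnUiThread(new Runnable() {
    @Override
    public void run() {
        Toast.makeText(getApplicationContext(),
                extractedText, Toast.LENGTH_LONG).show();
    }
});
```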

If you run your app now, you should see something like this:

Text detection results

Conclusion

In this tutorial you learned how to use the Cloud Vision API to add face detection, emotion detection, and optical character recognition capabilities to your Android apps. I'm sure you'll agree with me when I say that these new capabilities will allow your apps to offer more intuitive and smarter user interfaces.

It's worth mentioning that there's one important feature that's missing in the Cloud Vision API: face recognition. In its current form, the API can only detect faces, not identify them.

To learn more about the API, you can refer to the official documentation.

And meanwhile, check out some of our other tutorials on adding machine learning to your Android apps!

Published 2017-06-20 by Ashraff Hathibelagal


Android Design Patterns: The Observer Pattern


What Is the Observer Pattern?

The Observer Pattern is a software design pattern that establishes a one-to-many dependency between objects. Anytime the state of one of the objects (the "subject" or "observable") changes, all of the other objects ("observers") that depend on it are notified.

Let's use the example of users that have subscribed to receive offers from Envato Market via email. The users in this case are observers. Anytime there is an offer from Envato Market, they get notified about it via email. Each user can then either buy into the offer or decide that they might not be really interested in it at that moment. A user (an observer) can also subscribe to receive offers from another e-commerce marketplace if they want and might later completely unsubscribe from receiving offers from any of them. 

This pattern is very similar to the Publish-Subscribe pattern. The subject or observable publishes a notification to the dependent observers without knowing how many observers have subscribed to it, or who they are. The observable only knows that they should implement an interface (we'll get to that shortly); it doesn't care what action the observers might perform in response.

Benefits of the Observer Pattern

  • The subject knows little about its observers. The only thing it knows is that the observers implement or agree to a certain contract or interface. 
  • Subjects can be reused without involving their observers, and the same goes for observers too.
  • No modification is done to the subject to accommodate a new observer. The new observer just needs to implement an interface that the subject is aware of and then register to the subject.  
  • An observer can be registered to more than one subject.

All these benefits give you loose coupling between modules in your code, which enables you to build a flexible design for your application. In the rest of this post, we'll look at how to create our own Observer pattern implementation, use the built-in Java Observer/Observable API, and explore third-party libraries that offer this functionality.

Building Our Own Observer Pattern

1. Create the Subject Interface

We start by defining an interface that subjects (observables) will implement.
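A minimal sketch of that contract, using the method names described below:

```java
// The contract every subject (observable) agrees to
public interface Subject {
    void registerObserver(RepositoryObserver repositoryObserver);
    void removeObserver(RepositoryObserver repositoryObserver);
    void notifyObservers();
}
```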

In the code above, we created a Java interface with three methods. The first method, registerObserver(), registers an observer of type RepositoryObserver (we'll create that interface shortly) with the subject. removeObserver() will be called to remove an observer that wants to stop getting notifications from the subject, and finally, notifyObservers() will send a broadcast to all observers whenever there is a change. Now, let's create a concrete subject class that will implement the subject interface we have created:
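Here's one way that class might look; the singleton accessor and the simulated network call are sketches consistent with the description that follows:

```java
public class UserDataRepository implements Subject {

    private static UserDataRepository INSTANCE = null;
    private final ArrayList<RepositoryObserver> mObservers;

    private String mFullName;
    private int mAge;

    // Private constructor: the repository is a singleton
    private UserDataRepository() {
        mObservers = new ArrayList<>();
    }

    public static synchronized UserDataRepository getInstance() {
        if (INSTANCE == null) {
            INSTANCE = new UserDataRepository();
        }
        return INSTANCE;
    }

    @Override
    public void registerObserver(RepositoryObserver repositoryObserver) {
        if (!mObservers.contains(repositoryObserver)) {
            mObservers.add(repositoryObserver);
        }
    }

    @Override
    public void removeObserver(RepositoryObserver repositoryObserver) {
        mObservers.remove(repositoryObserver);
    }

    @Override
    public void notifyObservers() {
        for (RepositoryObserver observer : mObservers) {
            observer.onUserDataChanged(mFullName, mAge);
        }
    }

    // Simulates a network request delivering new user data
    public void setUserData(String fullName, int age) {
        mFullName = fullName;
        mAge = age;
        notifyObservers();
    }
}
```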

The class above implements the Subject interface. It holds its observers in an ArrayList, which is created in the private constructor. An observer registers by being added to the ArrayList and, likewise, unregisters by being removed from it.

Note that we are simulating a network request to retrieve the new data. Once the setUserData() method is called and given the new value for the full name and age, we call the notifyObservers() method which, as it says, notifies or sends a broadcast to all registered observers about the new data change. The new values for the full name and age are also passed along. This subject can have multiple observers but, in this tutorial, we'll create just one observer. But first, let's create the observer interface. 

2. Create the Observer Interface
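The interface only needs the single callback that the subject invokes:

```java
// Concrete observers implement this to receive the pushed data
public interface RepositoryObserver {
    void onUserDataChanged(String fullName, int age);
}
```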

In the code above, we created the observer interface which concrete observers should implement. This allows our code to be more flexible because we are coding to an interface instead of a concrete implementation. A concrete Subject class does not need to be aware of the many concrete observers it may have; all it knows about them is that they implement the RepositoryObserver interface. 

Let's now create a concrete class that implements this interface.
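A sketch of the Activity; the layout and widget ids here are assumptions:

```java
public class UserProfileActivity extends AppCompatActivity
        implements RepositoryObserver {

    private UserDataRepository mUserDataRepository;
    private TextView mTextViewUserFullName;
    private TextView mTextViewUserAge;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_user_profile);

        mTextViewUserFullName = (TextView) findViewById(R.id.text_view_full_name);
        mTextViewUserAge = (TextView) findViewById(R.id.text_view_age);

        // Get the singleton repository and register as one of its observers
        mUserDataRepository = UserDataRepository.getInstance();
        mUserDataRepository.registerObserver(this);
    }

    @Override
    public void onUserDataChanged(String fullName, int age) {
        mTextViewUserFullName.setText(fullName);
        mTextViewUserAge.setText(String.valueOf(age));
    }

    @Override
    protected void onDestroy() {
        // Stop receiving notifications once the activity is going away
        mUserDataRepository.removeObserver(this);
        super.onDestroy();
    }
}
```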

The first thing to notice in the code above is that UserProfileActivity implements the RepositoryObserver interface, so it must implement the method onUserDataChanged(). In the onCreate() method of the Activity, we get an instance of the UserDataRepository and then register this Activity as one of its observers.

In the onDestroy() method, we want to stop getting notifications, so we unregister from receiving notifications. 

In the onUserDataChanged() method, we want to update the TextView widgets—mTextViewUserFullName and mTextViewUserAge—with the new set of data values.  

Right now we just have one observer class, but it's possible and easy for us to create other classes that want to be observers of the UserDataRepository class. For example, we could easily have a SettingsActivity that wants to also be notified about the user data changes by becoming an observer. 

Push and Pull Models

In the example above, we are using the push model of the observer pattern. In this model, the subject notifies the observers about the change by passing along the data that changed. But in the pull model, the subject will still notify the observers, but it does not actually pass the data that changed. The observers then pull the data they need once they receive the notification. 

Utilising Java's Built-In Observer API

So far, we have created our own Observer pattern implementation, but Java has built-in Observer / Observable support in its API. In this section, we are going to use this. This API simplifies some of the implementation, as you'll see. 

1. Create the Observable

Our UserDataRepository—which is our subject or observable—will now extend the java.util.Observable superclass to become an Observable. This is a class that wants to be observed by one or more observers. 
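Here's a sketch of the refactored class; the field names mirror the earlier version, and the getters exist because this variant uses the pull style discussed below:

```java
import java.util.Observable;

class UserDataRepository extends Observable {

    private static UserDataRepository INSTANCE = null;

    private String mFullName;
    private int mAge;

    // Private constructor: the repository is still a singleton
    private UserDataRepository() {
    }

    public static synchronized UserDataRepository getInstance() {
        if (INSTANCE == null) {
            INSTANCE = new UserDataRepository();
        }
        return INSTANCE;
    }

    // Simulates a network request delivering fresh data
    public void setUserData(String fullName, int age) {
        mFullName = fullName;
        mAge = age;
        setChanged();       // flag the data as dirty first...
        notifyObservers();  // ...or the observers won't be notified
    }

    // Pull-style access: observers read these after being notified
    public String getFullName() {
        return mFullName;
    }

    public int getAge() {
        return mAge;
    }
}
```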

Now that we have refactored our UserDataRepository class to use the Java Observable API, let's see what has changed compared to the previous version. The first thing to notice is that we are extending a super class (this means that this class can't extend any other class) and not implementing an interface as we did in the previous section. 

We are no longer holding an ArrayList of observers; this is handled in the super class. Similarly, we don't have to worry about registration, removal, or notification of observers—java.util.Observable is handling all of those for us. 

Another difference is that in this class we are employing a pull style. We alert the observers that a change has happened with notifyObservers(), but the observers will need to pull the data using the field getters we have defined in this class. If you want to use the push style instead, then you can use the method notifyObservers(Object arg) and pass the changed data to the observers in the object argument. 

The setChanged() method of the super class sets a flag to true, indicating that the data has changed. Then you can call the notifyObservers() method. Be aware that if you don't call setChanged() before calling notifyObservers(), the observers won't be notified. You can check the value of this flag using the method hasChanged() and clear it back to false with clearChanged(). Now that we have our observable class created, let's see how to set up an observer as well.

2. Create the Observer

Our UserDataRepository observable class needs a corresponding Observer to be useful, so let's refactor our UserProfileActivity to implement the java.util.Observer interface. 
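A sketch of the refactored Activity; as before, the widget ids are assumptions:

```java
public class UserProfileActivity extends AppCompatActivity implements Observer {

    private UserDataRepository mUserDataRepository;
    private TextView mTextViewUserFullName;
    private TextView mTextViewUserAge;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_user_profile);

        mTextViewUserFullName = (TextView) findViewById(R.id.text_view_full_name);
        mTextViewUserAge = (TextView) findViewById(R.id.text_view_age);

        mUserDataRepository = UserDataRepository.getInstance();
        mUserDataRepository.addObserver(this);
    }

    @Override
    public void update(Observable observable, Object arg) {
        // An observer can watch several observables, so check which one fired
        if (observable instanceof UserDataRepository) {
            UserDataRepository repository = (UserDataRepository) observable;
            mTextViewUserFullName.setText(repository.getFullName());
            mTextViewUserAge.setText(String.valueOf(repository.getAge()));
        }
    }

    @Override
    protected void onDestroy() {
        mUserDataRepository.deleteObserver(this);
        super.onDestroy();
    }
}
```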

In the onCreate() method, we add this class as an observer to the UserDataRepository observable by using the addObserver() method in the java.util.Observable super class.  

In the update() method which the observer must implement, we check if the Observable we receive as a parameter is an instance of our UserDataRepository (note that an observer can subscribe to different observables), and then we cast it to that instance and retrieve the values we want using the field getters. Then we use those values to update the view widgets. 

When the activity is destroyed, we don't need to get any updates from the observable, so we'll just remove the activity from the observer list by calling the method deleteObserver().

Libraries to Implement an Observer Pattern

If you don't want to build your own Observer pattern implementation from scratch or use the Java Observer API, you can use some free and open-source libraries that are available for Android such as Greenrobot's EventBus. To learn more about it, check out my tutorial here on Envato Tuts+.

Or, you might like RxAndroid and RxJava. Learn more about them here:

Conclusion

In this tutorial, you learned about the Observer pattern in Java: what it is, the benefits of using it, how to implement your own, how to use the Java Observer API, and some third-party libraries for implementing this pattern.

In the meantime, check out some of our other courses and tutorials on the Java language and Android app development!

Published 2017-06-21 by Chike Mgbemena

Android Design Patterns: The Observer Pattern

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-28963

What Is the Observer Pattern?

The Observer Pattern is a software design pattern that establishes a one-to-many dependency between objects. Anytime the state of one of the objects (the "subject" or "observable") changes, all of the other objects ("observers") that depend on it are notified.

Let's use the example of users that have subscribed to receive offers from Envato Market via email. The users in this case are observers. Anytime there is an offer from Envato Market, they get notified about it via email. Each user can then either buy into the offer or decide that they might not be really interested in it at that moment. A user (an observer) can also subscribe to receive offers from another e-commerce marketplace if they want and might later completely unsubscribe from receiving offers from any of them. 

This pattern is very similar to the Publish-Subscribe pattern. The subject or observable publishes out a notification to the dependent observers without even knowing how many observers have subscribed to it, or who they are—the observable only knows that they should implement an interface (we'll get to that shortly), without worrying about what action the observers might perform.

Benefits of the Observer Pattern

  • The subject knows little about its observers. The only thing it knows is that the observers implement or agree to a certain contract or interface. 
  • Subjects can be reused without involving their observers, and the same goes for observers too.
  • No modification is done to the subject to accommodate a new observer. The new observer just needs to implement an interface that the subject is aware of and then register to the subject.  
  • An observer can be registered to more than one subject it's registered to.

All these benefits give you loose coupling between modules in your code, which enables you to build a flexible design for your application. In the rest of this post, we'll look at how to create our own Observer pattern implementation, and we'll also use the built-in Java Observer/Observable API as well as looking into third-party libraries that can offer such functionality. 

Building Our Own Observer Pattern

1. Create the Subject Interface

We start by defining an interface that subjects (observables) will implement.

In the code above, we created a Java Interface with three methods. The first method registerObserver(), as it says, will register an observer of type RepositoryObserver (we'll create that interface shortly) to the subject. removeObserver() will be called to remove an observer that wants to stop getting notifications from the subject, and finally, notifyObserver() will send a broadcast to all observers whenever there is a change. Now, let's create a concrete subject class that will implement the subject interface we have created:

The class above implements the Subject interface. It holds its observers in an ArrayList, which it creates in its private constructor. An observer registers by being added to the ArrayList and, likewise, unregisters by being removed from the ArrayList.

Note that we are simulating a network request to retrieve the new data. Once the setUserData() method is called and given the new value for the full name and age, we call the notifyObservers() method which, as it says, notifies or sends a broadcast to all registered observers about the new data change. The new values for the full name and age are also passed along. This subject can have multiple observers but, in this tutorial, we'll create just one observer. But first, let's create the observer interface. 

2. Create the Observer Interface

In the code above, we created the observer interface which concrete observers should implement. This allows our code to be more flexible because we are coding to an interface instead of a concrete implementation. A concrete Subject class does not need to be aware of the many concrete observers it may have; all it knows about them is that they implement the RepositoryObserver interface. 

Let's now create a concrete class that implements this interface.

The first thing to notice in the code above is that UserProfileActivity implements the RepositoryObserver interface, so it must implement the method onUserDataChanged(). In the onCreate() method of the Activity, we get an instance of the UserDataRepository, initialize it, and finally register this observer with it. 

In the onDestroy() method, we want to stop getting notifications, so we unregister from receiving notifications. 

In the onUserDataChanged() method, we want to update the TextView widgets—mTextViewUserFullName and mTextViewUserAge—with the new set of data values.  

Right now we just have one observer class, but it's possible and easy for us to create other classes that want to be observers of the UserDataRepository class. For example, we could easily have a SettingsActivity that wants to also be notified about the user data changes by becoming an observer. 
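Putting the pieces described so far together, here is a minimal, runnable sketch of this custom implementation. A plain ConsoleObserver class stands in for the Android Activity (which can't run outside Android); the interface and class names follow the tutorial.

```java
import java.util.ArrayList;
import java.util.List;

interface Subject {
    void registerObserver(RepositoryObserver observer);
    void removeObserver(RepositoryObserver observer);
    void notifyObservers();
}

interface RepositoryObserver {
    void onUserDataChanged(String fullName, int age);
}

class UserDataRepository implements Subject {
    private static UserDataRepository instance;
    private final List<RepositoryObserver> observers;
    private String fullName;
    private int age;

    private UserDataRepository() {
        observers = new ArrayList<>();
    }

    public static UserDataRepository getInstance() {
        if (instance == null) instance = new UserDataRepository();
        return instance;
    }

    @Override public void registerObserver(RepositoryObserver observer) {
        if (!observers.contains(observer)) observers.add(observer);
    }

    @Override public void removeObserver(RepositoryObserver observer) {
        observers.remove(observer);
    }

    // Push model: the changed data is passed along with the notification.
    @Override public void notifyObservers() {
        for (RepositoryObserver observer : observers) {
            observer.onUserDataChanged(fullName, age);
        }
    }

    // Simulates new data arriving, e.g. from a network request.
    public void setUserData(String fullName, int age) {
        this.fullName = fullName;
        this.age = age;
        notifyObservers();
    }
}

// Stand-in for UserProfileActivity: any class implementing the interface works.
class ConsoleObserver implements RepositoryObserver {
    String lastFullName;
    int lastAge;

    @Override public void onUserDataChanged(String fullName, int age) {
        lastFullName = fullName;
        lastAge = age;
        System.out.println(fullName + " is " + age);
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        UserDataRepository repository = UserDataRepository.getInstance();
        ConsoleObserver observer = new ConsoleObserver();
        repository.registerObserver(observer);
        repository.setUserData("Jane Doe", 30); // prints "Jane Doe is 30"
        repository.removeObserver(observer);    // no further notifications
    }
}
```

In an Activity, registration and removal would go in onCreate() and onDestroy(), exactly as described above.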

Push and Pull Models

In the example above, we are using the push model of the observer pattern. In this model, the subject notifies the observers about the change by passing along the data that changed. But in the pull model, the subject will still notify the observers, but it does not actually pass the data that changed. The observers then pull the data they need once they receive the notification. 

Utilizing Java's Built-In Observer API

So far, we have created our own Observer pattern implementation, but Java has built-in Observer / Observable support in its API. In this section, we are going to use this. This API simplifies some of the implementation, as you'll see. 

1. Create the Observable

Our UserDataRepository—which is our subject or observable—will now extend the java.util.Observable superclass to become an Observable. This is a class that wants to be observed by one or more observers. 

Now that we have refactored our UserDataRepository class to use the Java Observable API, let's see what has changed compared to the previous version. The first thing to notice is that we are extending a super class (this means that this class can't extend any other class) and not implementing an interface as we did in the previous section. 

We are no longer holding an ArrayList of observers; this is handled in the super class. Similarly, we don't have to worry about registration, removal, or notification of observers—java.util.Observable is handling all of those for us. 

Another difference is that in this class we are employing a pull style. We alert the observers that a change has happened with notifyObservers(), but the observers will need to pull the data using the field getters we have defined in this class. If you want to use the push style instead, then you can use the method notifyObservers(Object arg) and pass the changed data to the observers in the object argument. 

The setChanged() method of the super class sets a flag to true, indicating that the data has changed. Then you can call the notifyObservers() method. Be aware that if you don't call setChanged() before calling notifyObservers(), the observers won't be notified. You can check the value of this flag with hasChanged() and clear it back to false with clearChanged(). Now that we have created our observable class, let's see how to set up an observer.  

2. Create the Observer

Our UserDataRepository observable class needs a corresponding Observer to be useful, so let's refactor our UserProfileActivity to implement the java.util.Observer interface. 

In the onCreate() method, we add this class as an observer to the UserDataRepository observable by using the addObserver() method in the java.util.Observable super class.  

In the update() method which the observer must implement, we check if the Observable we receive as a parameter is an instance of our UserDataRepository (note that an observer can subscribe to different observables), and then we cast it to that instance and retrieve the values we want using the field getters. Then we use those values to update the view widgets. 

When the activity is destroyed, we don't need to get any updates from the observable, so we'll just remove the activity from the observer list by calling the method deleteObserver()
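Here is a compact, runnable sketch of this built-in version, again with a plain class (a hypothetical UserProfileView) standing in for the Activity. Note that java.util.Observable and java.util.Observer still work but have been deprecated since Java 9.

```java
import java.util.Observable;
import java.util.Observer;

class UserDataRepository extends Observable {
    private String fullName;
    private int age;

    public String getFullName() { return fullName; }
    public int getAge() { return age; }

    // Pull model: observers are told that something changed and then
    // read the new values back through the getters above.
    public void setUserData(String fullName, int age) {
        this.fullName = fullName;
        this.age = age;
        setChanged();       // without this, notifyObservers() does nothing
        notifyObservers();
    }
}

// Stand-in for UserProfileActivity.
class UserProfileView implements Observer {
    String shownName;
    int shownAge;

    @Override public void update(Observable observable, Object arg) {
        // An observer can subscribe to several observables, so check the type.
        if (observable instanceof UserDataRepository) {
            UserDataRepository repository = (UserDataRepository) observable;
            shownName = repository.getFullName();
            shownAge = repository.getAge();
        }
    }
}

public class BuiltInObserverDemo {
    public static void main(String[] args) {
        UserDataRepository repository = new UserDataRepository();
        UserProfileView view = new UserProfileView();
        repository.addObserver(view);            // register
        repository.setUserData("Jane Doe", 30);  // view pulls the new values
        System.out.println(view.shownName + " is " + view.shownAge);
        repository.deleteObserver(view);         // unregister
    }
}
```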

Libraries to Implement an Observer Pattern

If you don't want to build your own Observer pattern implementation from scratch or use the Java Observer API, you can use some free and open-source libraries that are available for Android such as Greenrobot's EventBus. To learn more about it, check out my tutorial here on Envato Tuts+.

Or, you might like RxAndroid and RxJava. Learn more about them here:

Conclusion

In this tutorial, you learned about the Observer pattern in Java: what it is, the benefits of using it, how to implement your own, how to use the built-in Java Observer API, and some third-party libraries for implementing this pattern. 

In the meantime, check out some of our other courses and tutorials on the Java language and Android app development!

Published 2017-06-21 by Chike Mgbemena

Securing iOS Data at Rest: Encryption


In this post, we'll look at advanced uses of encryption for user data in iOS apps. We'll start with a high-level look at AES encryption, and then go on to look at some examples of how to implement AES encryption in Swift.

In the last post, you learned how to store data using the keychain, which is good for small pieces of information such as keys, passwords, and certificates. 

If you are storing a large amount of custom data that you want to be available only after the user or device authenticates, then it's better to encrypt the data using an encryption framework. For example, you may have an app that can archive private chat messages saved by the user or private photos taken by the user, or which can store the user's financial details. In these cases, you would probably want to use encryption.

There are two common flows in applications for encrypting and decrypting data from iOS apps. Either the user is presented with a password screen, or the application is authenticated with a server which returns a key to decrypt the data. 

It's never a good idea to reinvent the wheel when it comes to encryption. Therefore, we are going to use the AES standard provided by the iOS Common Crypto library.

AES

AES is a standard that encrypts and decrypts data using a key: the same key used to encrypt the data is used to decrypt it. AES supports several key sizes, and a 256-bit key (AES-256) is the preferred size for sensitive data.

RNCryptor is a popular encryption wrapper for iOS that supports AES. RNCryptor is a great choice because it gets you up and running very quickly without having to worry about the underlying details. It is also open source so that security researchers can analyze and audit the code.  

On the other hand, if your app deals with very sensitive information and you think your application will be targeted and cracked, you may want to write your own solution. The reason for this is that when many apps use the same code, it can make the hacker's job easier, allowing them to write a cracking app that finds common patterns in the code and applies patches to them. 

Keep in mind, though, that writing your own solution only slows down an attacker and prevents automated attacks. The protection you are getting from your own implementation is that a hacker will need to spend time and dedication on cracking your app alone. 

Whether you choose a third-party solution or choose to roll your own, it's important to be knowledgeable about how encryption systems work. That way, you can decide if a particular framework you want to use is really secure. Therefore, the rest of this tutorial will focus on writing your own custom solution. With the knowledge you'll learn from this tutorial, you'll be able to tell if you're using a particular framework securely. 

We'll start with the creation of a secret key that will be used to encrypt your data.

Create a Key

A very common error in AES encryption is to use a user's password directly as the encryption key. What if the user decides to use a common or weak password? How do we force users to use a key that is random and strong enough (has enough entropy) for encryption and then have them remember it? 

The solution is key stretching. Key stretching derives a key from a password by hashing it many times over with a salt. The salt is just a sequence of random data, and it is a common mistake to omit this salt—the salt gives the key its vitally important entropy, and without the salt, the same key would be derived if the same password was used by someone else.

Without the salt, a dictionary of words could be used to deduce common keys, which could then be used to attack user data. This is called a "dictionary attack". Tables with common keys that correspond to unsalted passwords are used for this purpose. They're called "rainbow tables".

Another pitfall when creating a salt is to use a random number generating function that was not designed for security. An example is the rand() function in C, which can be accessed from Swift. This output can end up being very predictable! 

To create a secure salt,  we will use the function SecRandomCopyBytes to create cryptographically secure random bytes—which is to say, numbers that are difficult to predict. 

To use the code, you'll need to add the following into your bridging header:
#import <CommonCrypto/CommonCrypto.h>

Here is the start of the code that creates a salt. We will add to this code as we go along:

Now we are ready to do key stretching. Fortunately, we already have a function at our disposal to do the actual stretching: the Password-Based Key Derivation Function (PBKDF2). PBKDF2 performs a function many times over to derive the key; increasing the number of iterations expands the time it would take to operate on a set of keys during a brute force attack. It is recommended to use PBKDF2 to generate your key.
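The tutorial's own code is Swift on top of Common Crypto and isn't reproduced in this excerpt, but the salt-plus-PBKDF2 flow is language-neutral. Purely as a runnable illustration, here is the same idea using Java's built-in crypto classes (the deriveKey helper name is ours):

```java
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class KeyStretching {

    // Derives a 256-bit AES key from a password and salt with PBKDF2.
    // More iterations = slower brute-force attacks against the password.
    static byte[] deriveKey(char[] password, byte[] salt, int iterations) {
        try {
            PBEKeySpec spec = new PBEKeySpec(password, salt, iterations, 256);
            return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1")
                    .generateSecret(spec).getEncoded();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // Cryptographically secure random salt; the Swift code uses
        // SecRandomCopyBytes for the same purpose.
        byte[] salt = new byte[32];
        new SecureRandom().nextBytes(salt);

        byte[] key = deriveKey("correct horse battery staple".toCharArray(), salt, 10_000);
        byte[] again = deriveKey("correct horse battery staple".toCharArray(), salt, 10_000);

        // Same password + same salt -> same key; a different salt would differ.
        System.out.println(Arrays.equals(key, again)); // true
        System.out.println(key.length);                // 32 bytes = 256 bits
    }
}
```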

Server-Side Key

You may be wondering now about the cases where you don't want to require users to provide a password within your app. Perhaps they are already authenticating with a single sign-on scheme. In this case, have your server generate an AES 256-bit (32-byte) key using a secure generator. The key should be different for different users or devices. On authenticating with your server, you can pass the server a device or user ID over a secure connection, and it can send the corresponding key back. 

This scheme has one major difference: if the key comes from the server, then the entity that controls the server is able to read your encrypted data if the device or the data is ever obtained. There is also the potential for the key to be leaked or exposed at a later time. 

On the other hand, if the key is derived from something only the user knows—the user's password—then only the user can decrypt that data. If you are protecting information such as private financial data, only the user should be able to unlock the data. If that information is known to the entity anyway, it may be acceptable to have the server unlock the content via a server-side key.

Modes and IVs

Now that we have a key, let's encrypt some data. There are different modes of encryption, but we'll be using the recommended mode: cipher block chaining (CBC). This operates on our data one block at a time. 

In CBC, each unencrypted block of data is XOR'd with the previously encrypted block to make the encryption stronger. The pitfall is that the first block has no previous block to be combined with, so it is never as unique as all the others. If a message to be encrypted were to start off the same as another message, the beginning of the encrypted output would be the same, and that would give an attacker a clue to figuring out what the message might be. 

To get around this potential weakness, we'll start the data to be saved with what is called an initialization vector (IV): a block of random bytes. The IV will be XOR’d with the first block of user data, and since each block depends on all blocks processed up until that point, it will ensure that the entire message will be uniquely encrypted, even if it has the same data as another message. In other words, identical messages encrypted with the same key will not produce identical results. So while salts and IVs are considered public, they should not be sequential or reused. 

We will use the same secure SecRandomCopyBytes function to create the IV.

Putting It All Together

To complete our example, we'll use the CCCrypt function with either kCCEncrypt or kCCDecrypt. Because we are using a block cipher, if the message doesn’t fit nicely into a multiple of the block size, we will need to tell the function to automatically add padding to the end. 

As usual in encryption, it is best to follow established standards. In this case, the PKCS7 standard defines how to pad the data. We tell our encryption function to use this standard by supplying the kCCOptionPKCS7Padding option. Putting it all together, here is the full code to encrypt and decrypt a string.

And here is the decryption code:

Finally, here is a test to ensure that data is decrypted correctly after encryption:

In our example, we package all the necessary information and return it as a Dictionary so that all the pieces can later be used to successfully decrypt the data. You only need to store the IV and salt, either in the keychain or on your server.
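The CCCrypt-based Swift listings are not included in this excerpt, but the complete flow (random key and IV, CBC mode, PKCS#7 padding, round-trip check) can be sketched with Java's built-in Cipher class as an illustration. In the JCE, "PKCS5Padding" is in practice PKCS#7 padding for AES's 16-byte blocks, matching kCCOptionPKCS7Padding:

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class AesCbcSketch {

    // One helper handles both directions, like CCCrypt with
    // kCCEncrypt / kCCDecrypt.
    static byte[] crypt(int mode, byte[] key, byte[] iv, byte[] data) {
        try {
            Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
            cipher.init(mode, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
            return cipher.doFinal(data);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        SecureRandom random = new SecureRandom();
        byte[] key = new byte[32]; // in practice, derived with PBKDF2 from a password
        byte[] iv = new byte[16];  // one fresh random IV per message
        random.nextBytes(key);
        random.nextBytes(iv);

        byte[] plaintext = "secret message".getBytes(StandardCharsets.UTF_8);
        byte[] ciphertext = crypt(Cipher.ENCRYPT_MODE, key, iv, plaintext);
        byte[] decrypted  = crypt(Cipher.DECRYPT_MODE, key, iv, ciphertext);

        // Round trip: decrypting with the same key and IV restores the data.
        System.out.println(Arrays.equals(plaintext, decrypted)); // true
    }
}
```

As in the article's Dictionary example, the IV and salt would be stored alongside the ciphertext so decryption is possible later.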

Conclusion

This completes the three-part series on securing data at rest. We have seen how to properly store passwords, sensitive pieces of information, and large amounts of user data. These techniques are the baseline for protecting stored user information in your app.

It is a huge risk when a user's device is lost or stolen, especially with recent exploits to gain access to a locked device. While many system vulnerabilities are patched with a software update, the device itself is only as secure as the user's passcode and version of iOS. Therefore it is up to the developer of each app to provide strong protection of sensitive data being stored. 

All of the topics covered so far make use of Apple's frameworks. I will leave an idea with you to think about. What happens when Apple's encryption library gets attacked? 

When one commonly used security architecture is compromised, all of the apps that rely on it are also compromised. Any of iOS's dynamically linked libraries, especially on jailbroken devices, can be patched and swapped with malicious ones. 

However, a static library that is bundled with the binary of your app is protected from this kind of attack because if you try and patch it, you end up changing the app binary. This will break the code signature of the app, preventing it from being launched. If you imported and used, for example, OpenSSL for your encryption, your app would not be vulnerable to a widespread Apple API attack. You can compile OpenSSL yourself and statically link it into your app.

So there is always more to learn, and the future of app security on iOS is always evolving. The iOS security architecture even now supports cryptographic devices and smart cards! In closing, you now know the best practices for securing data at rest, so it's up to you to follow them!

In the meantime, check out some of our other content about iOS app development and app security.

Published 2017-06-23 by Collin Stuart


Create a Mobile Application for Displaying Your Website RSS Content With Ionic

What You'll Be Creating

In this tutorial we will take a look at creating a mobile application which displays the RSS content of a website. We will configure the RSS URL and the application will download it, parse it and display the posts from the RSS. 

To create the mobile application, we will use the Ionic Framework v1 together with AngularJS. To complete this tutorial, you need to have some experience with JavaScript and HTML. Also, it helps if you've worked with AngularJS before. 

If you have never worked with Ionic Framework before, I recommend at least taking a look at the Getting Started guide as it gives you a quick insight into how things work.

Let's begin!

Setting Up the Ionic Project

I will assume that you have Node and npm (the Node package manager) installed on your system. Installing the Ionic framework is then as easy as running the following:
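The command itself is missing from this excerpt; assuming the Ionic v1 era tooling, it is the global npm install of both packages (prefix with sudo on macOS/Linux if permissions require it):

```bash
npm install -g cordova ionic
```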

This will install both Cordova and Ionic on your computer. 

Cordova is the core technology behind Ionic. Essentially, it gives our mobile application an embedded browser in which all of our HTML and JavaScript code runs. This is called a hybrid mobile application, as the application does not run native code, but runs inside the browser.

On top of Cordova, Ionic adds the ability to write our code with AngularJS, as well as a very neat UI framework.

With Ionic in place, we can create our project using the Ionic CLI, a very useful command-line tool. Ionic provides three default project templates which can be used as a starting point:

  • blank: as the name says, it's an empty project with only the minimal necessary components in place.
  • tabs: an application using tabs for navigating through its screens.
  • sidemenu: an application using a standard mobile side menu for navigation.

For this tutorial, we will be using the tabs application template. To start our project, let's run:
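Assuming the v1 CLI, the command (with the project name used throughout this tutorial) would be:

```bash
ionic start myWebsiteOnMobile tabs
```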

Ionic will download and install all components needed, and it will create the project folder named myWebsiteOnMobile. Go into the project directory by running:
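That is simply:

```bash
cd myWebsiteOnMobile
```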

Because our application is a hybrid mobile application, we have the advantage of being able to run the application inside a browser. To do this, Ionic provides a neat built-in web server which runs our application like this:
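With the v1 CLI, the development server is started with:

```bash
ionic serve
```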

This will open a browser with our application loaded, and it will look like this:

The Ionic homescreen

To stop the server, use Control-C on your command-line screen. To get a better idea of how the application looks on a mobile, you can use:
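Again assuming the v1 CLI, the side-by-side preview is the lab mode:

```bash
ionic serve --lab
```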

This will open the application in the browser, showing an iOS and an Android preview of the app side by side.

The iOS and Android Preview

The tabs Ionic application template has three tabs: Status, Chats, and Account. In the next steps we will adjust the application to fit our needs.

How to Adjust the Default Ionic Tabs Template Application

For our application we will have two tabs:

  • Latest posts: showing a list of latest posts retrieved from the RSS feed.
  • Settings: where the user will be able to configure several aspects of the application.

From the Latest posts tab, the user will be able to click on any of the latest posts and see more information about the post, with the possibility of opening up the post in an external browser.

Since our Latest posts tab is similar to the Chats tab provided by the template, we will reuse it, together with the Account tab, which will become our Settings tab. We can make all of these modifications with the Ionic web server running, and Ionic will reload the app for us. This is a very neat feature that speeds up development.

As mentioned before, Ionic uses AngularJS, and the whole application is actually an AngularJS module. The module is defined in www/js/app.js, and here is also where the paths or routes of the application are defined. Each screen of the application has a corresponding route.

Let's remove the Status tab since we will not need it. To do this, we first need to change the default screen (or route) of our application to point to the Chats screen, which will become our main screen. The default screen is configured via $urlRouterProvider.otherwise(), so let's change that to:

If we now reload http://localhost:8100 in our browser, we will see that the Chats tab will be loaded by default.

To remove the Status tab, we need to edit the www/templates/tabs.html file that holds the template for the tab component. We will remove the element:

When saving, we will see that the application now has only two tabs: Chats and Account.

While in the www/templates/tabs.html file, we notice some tags that are not standard HTML, like ion-tabs, ion-tab, and ion-nav-view. These are AngularJS directives defined by the Ionic Framework. Directives are tags that pack functionality behind them, and they are a very convenient way to write more structured and concise code.

In our case, the ion-tabs directive is the tabs component, which for each tab requires an ion-tab directive.

Let's change our tabs from Chat and Account to our required names Latest posts and Settings. To do this, we will modify several things in the www/templates/tabs.html file:

  • title attribute of the ion-tab elements which determines the text on the tab button. We will change that to Latest posts and Settings respectively.
  • href attribute of the ion-tab elements which points to the route or screen URL. We will change those to #/tab/latest-posts and #/tab/settings.
  • name attribute of the ion-nav-view elements to tab-latest-posts and tab-settings. These are the identifiers for the view templates used for the Latest posts and Settings screens.

As a result, www/templates/tabs.html should look like this:

After making these changes, we will get some errors. This is because we also have to adjust our routes to use the new identifiers we have used. In www/js/app.js, we need to change the state identifiers, the view identifiers and the url for each route according to what we have set above.

For each route (or screen), there is a controller defined. This is a basic MVC (Model-View-Controller) design pattern. Controllers are defined within the file www/js/controllers.js. For consistency purposes, we will change the names of the controllers in both www/js/app.js and www/js/controller.js:

  • ChatsCtrl becomes LatestPostsCtrl.
  • ChatDetailCtrl becomes PostDetailCtrl.
  • AccountCtrl becomes SettingsCtrl.

Also, for each route we have a view template defined, so let's change them too. Edit www/js/app.js and modify templateUrl like this:

  • Change tab-chats.html to tab-latest-posts.html. Also rename the file www/templates/tab-chats.html to www/templates/tab-latest-posts.html.
  • Change chat-detail.html to post-detail.html. Also rename the file www/templates/chat-detail.html to www/templates/post-detail.html.
  • Change tab-account.html to tab-settings.html. Also rename the file www/templates/tab-account.html to www/templates/tab-settings.html.
  • Finally, change the view that gets loaded by default to latest-posts by using $urlRouterProvider.otherwise('/tab/latest-posts').

If all went well then you should end up with the www/js/app.js file looking like this:

And our cleaned up www/js/controllers.js file looks like this:

Now that we have restructured the app to fit our needs, let's move on to the next part and add some functionality.

How to Retrieve an RSS Feed With Ionic

In order to display the list of latest posts, our application will need to retrieve the RSS feed from a URL. As a best practice, it is advisable that this kind of functionality reside in the service layer of the application. In this way we can use it more easily in our controller and then present it to the user by using a view.

The RSS service will make use of Yahoo's YQL REST API to retrieve the RSS of our website. To call on the REST API, we will use the $http provider offered by AngularJS.

Ionic services are usually defined in the www/js/services.js file, so that's where we will put ours too. The code will look like this:

We declare the service using the service() method provided by AngularJS. We then inject Angular's $http module so we can call it in our service.

The self variable is a reference to the RSS service so that we can call it from within the service's methods. The main method of the service is the download() method, which downloads the feed information and processes it. There are two main formats used for website feeds: RSS and ATOM. For our application, we have used the feed of tutorials from Tuts+ https://tutorials.tutsplus.com/posts.atom which is in ATOM format, but for completeness we have taken into account the RSS format too.

The download() method calls on the YQL API and parses the results using the parseAtom() or the parseRSS() methods depending on the type of feed. The idea here is to have the same output format which will be passed further via the callback next(). With the RSS service in place, we can move on to the controller.

Hooking the RSS Service to the Latest Posts Controller

In our www/js/controllers.js file, we need to load the RSS data and pass it to our view. To do that, we only need to modify our LatestPostsCtrl controller like this:

Using Angular's dependency injection mechanism, we only need to specify the $scope and RSS variables as method parameters, and it will know how to load those modules. The $scope module allows us to set variables on the model bound to the view. Any values set in the scope can be then retrieved and displayed inside the view associated with the controller.

When the view for latest posts is loaded, it will call on the LatestPostsCtrl controller, and this in turn will use the RSS service to download the feed information. The results are parsed and passed back as an array using the posts variable, which we store in the current scope.

With all that out of the way, we can now move on to the view part, displaying the list of posts retrieved from the feed.

Hooking the Latest Posts View to the Feed Data

We now need to modify our view for the latest posts. If you remember, this is configured in the www/js/app.js file via the templateUrl attribute, and it points to the www/templates/tab-latest-posts.html file.

What we will want to do is display the list of feeds. Since the feed information may contain HTML, and this will only clutter the list of latest posts, we need something to extract the text without the HTML tags from a post's content. The easiest way to do that is by defining an AngularJS filter that strips the HTML tags from text. Let's do that in www/js/services.js by adding:

No back to our view inside the www/templates/tab-latest-posts.html file, let's modify it to look like this:

We are using the Ionic list UI component together with Angular's ng-repeat directive, which will iterate through the posts set on the scope of our controller. For each post entry, we will have a list item with its title and with the description stripped of HTML tags by the use of the htmlToPlaintext filter. Also note that clicking a post should take us to the detail of the post because of the href attribute set to #/tab/latest-posts/{{post.id}}. That does not work yet, but we will take care of that in the next section.

If we now run the application using ionic serve --lab, we should get something like this:

Viewing Latest Posts

Showing the Details of a Post

When clicking on a post in the list, we go to the post details screen of the application. Because each screen of the application has its own controller and therefore its own scope, we can't access the list of posts to display a specific post. We can call the RSS service again, but that would be inefficient.

To solve this problem, we can make use of the $rootScope directive offered by Angular. This references a scope that overarches all controllers in the application. Let's modify our LatestPostCtrl to set the posts in the $rootScope and then search for the specific post that the user clicked in the PostDetailCtrl. The resulting code in www/js/controllers.js will look like this:

We simply injected $rootScope in both controllers and used it for passing posts between the two controllers. Please note that we don't need to make any changes in our latest posts view as $rootScope and $scope are both accessible in the same way from the view.

Inside the PostDetailCtrl controller, we simply search for the post with the id passed in the link clicked by the user. We do that by comparing each post ID with the value in the URL passed via the $stateParams.postId variable. If we find a match then we set the post on the scope so we can use it in our view.

Let's now adjust our post detail view www/templates/post-detail.html like this:

This is what we have done in the view:

  • We have placed the title of the post in the header of the screen.
  • We have placed an "Open" button in the header on the right. This button will open the post link in an external browser because of the attribute target="_system". We have to do this because the application is already running in a browser due to Cordova. If we didn't set that attribute, the post would have opened in the same browser as the application, and then we would not have a way to return to the application.
  • We display the description of the post as HTML by using Angular's ng-bind-html directive.

While running the application, I noticed that if the post description contains images, some of them fall off the screen. This might be the case with other HTML elements like videos. We can easily fix this by adding the following CSS rule in www/css/style.css.

If we now take a look at the application and click on one of the posts, we should see something like this:

Seeing posts for Tuts articles

And our application is almost complete. In the next section, we will take a look at implementing the settings screen.

Adding Settings for Our Ionic Application

For our settings screen, we will implement a way to indicate how many posts to display on the main screen of the application. We will store this setting in the localStorage memory, which is not erased when the application is closed. Let's edit the controllers file www/js/controllers.js and change the SettingsCtrl controller like this:

Also, we need to modify the settings screen in www/templates/tab-settings.html like this:

The controller retrieves the setting myWebsiteOnMobile.maxPosts from the localStorage. If it does not exist, it will be null, and we will consider that there is no limit for the maximum number of posts.

We call the $scope.$watch() method to monitor changes of the settings.maxPosts variable, which is bound to the radio control in the settings screen.

With all this in place, every time we change the maximum number of posts on the settings screen, the setting will be stored in the localStorage, and it will be retrieved from there when the application restarts.

Now let's make use of this setting. This is as simple as adding this in the LatestPostsCtrl from www/js/controllers.js:

And adding a directive in the latest posts screen www/templates/tab-latest-posts.html:

Notice the limitTo:maxPosts Angular filter. This will limit the number of posts displayed to the number taken from the localStorage. By default, this will be null, which will display all the feeds retrieved by the RSS service.

Congratulations! We now have a fully working application displaying an RSS feed.

Conclusion

In this tutorial, we have seen how to create a hybrid mobile application using the Ionic Framework and AngularJS. There is only one more thing to do: run the application on a mobile device or mobile emulator. This is very simple with Ionic. To run the application on an Android emulator, just run:

If you want to download a premade Ionic application template for transforming any website to a mobile application, try the Website to Mobile Ionic Application Template from CodeCanyon.

An application template on CodeCanyon
2017-06-27T12:00:17.000Z2017-06-27T12:00:17.000ZJohn Negoita

Create a Mobile Application for Displaying Your Website RSS Content With Ionic

Final product image
What You'll Be Creating

In this tutorial we will take a look at creating a mobile application which displays the RSS content of a website. We will configure the RSS URL and the application will download it, parse it and display the posts from the RSS. 

To create the mobile application, we will use the Ionic Framework v1 together with AngularJS. To complete this tutorial, you need to have some experience with JavaScript and HTML. Also, it helps if you've worked with AngularJS before. 

If you have never worked with Ionic Framework before, I recommend at least taking a look at the Getting Started guide as it gives you a quick insight into how things work.

Let's begin!

Setting Up the Ionic Project

I will assume that you have installed Node on your system and you also have the npm (the Node package manager). Installing the Ionic framework is as easy as running the following:
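The original command listing is missing here; assuming the standard npm package names for the Ionic v1 era (`cordova` and `ionic`), the install command would have been:

```shell
npm install -g cordova ionic
```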

This will install both Cordova and Ionic on your computer. 

Cordova is the core technology for Ionic, and basically it allows us to have an embedded browser in our mobile application. In that browser we will be able to run all our HTML and JavaScript code. This is called a hybrid mobile application, as the application does not run native code, but runs inside the browser. 

Next to Cordova, Ionic adds to that the possibility of using AngularJS for writing our code, and it also adds a very neat UI framework.

With Ionic in place, we can create our project using the Ionic CLI, a very useful command-line tool. Ionic provides three default project templates which can be used as a starting point:

  • blank: as the name says, it's an empty project with only the minimal necessary components in place.
  • tabs: an application using tabs for navigating through its screens.
  • sidemenu: an application using a standard mobile side menu for navigation.

For this tutorial, we will be using the tabs application template. To start our project, let's run:
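The project name `myWebsiteOnMobile` is confirmed by the next paragraph; a sketch of the `ionic start` command using the tabs template:

```shell
ionic start myWebsiteOnMobile tabs
```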

Ionic will download and install all components needed, and it will create the project folder named myWebsiteOnMobile. Go into the project directory by running:
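The command to enter the project folder created above:

```shell
cd myWebsiteOnMobile
```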

Because our application is a hybrid mobile application, we have the advantage of being able to run the application inside a browser. To do this, Ionic provides a neat built-in web server which runs our application like this:
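The built-in web server is started with:

```shell
ionic serve
```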

This will open a browser with our application loaded, and it will look like this:

The Ionic homescreen

To stop the server, use Control-C on your command-line screen. To get a better idea of how the application looks on a mobile, you can use:
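The side-by-side preview is launched with the `--lab` flag, which this article also uses later on:

```shell
ionic serve --lab
```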

This will open the application in the browser, showing an iOS and an Android preview of the app side by side.

The iOS and Android Preview

The tabs Ionic application template has three tabs: Status, Chats, and Account. In the next steps we will adjust the application to fit our needs.

How to Adjust the Default Ionic Tabs Template Application

For our application we will have two tabs:

  • Latest posts: showing a list of latest posts retrieved from the RSS feed.
  • Settings: where the user will be able to configure several aspects of the application.

From the Latest posts tab, the user will be able to click on any of the latest posts and see more information about the post, with the possibility of opening up the post in an external browser.

Since our Latest posts tab is similar to the Chats tab provided by the template, we will reuse that together with the Account tab, which will become our Settings tab. We can do all modifications with the Ionic webserver running, and Ionic will reload the app for us. This is a very neat feature which will speed up development.

As mentioned before, Ionic uses AngularJS, and the whole application is actually an AngularJS module. The module is defined in www/js/app.js, and here is also where the paths or routes of the application are defined. Each screen of the application has a corresponding route.

Let's remove the Status tab since we will not need it. To do this, we first need to change the default screen (or route) of our application to point to the Chats screen, which will become our main screen. The default screen is configured via $urlRouterProvider.otherwise(), so let's change that to:
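Assuming the starter registers the Chats screen under the URL `/tab/chats` (the Ionic v1 tabs template's convention), the change would be:

```javascript
$urlRouterProvider.otherwise('/tab/chats');
```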

If we now reload http://localhost:8100 in our browser, we will see that the Chats tab will be loaded by default.

To remove the Status tab, we need to edit the www/templates/tabs.html file that holds the template for the tab component. We will remove the element:
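The element to remove is the first `ion-tab` block, the Status tab. The exact icon class names below are assumptions based on the stock Ionic v1 tabs template:

```html
<!-- Remove this whole ion-tab block (the Status tab): -->
<ion-tab title="Status" icon-off="ion-ios-pulse" icon-on="ion-ios-pulse-strong" href="#/tab/dash">
  <ion-nav-view name="tab-dash"></ion-nav-view>
</ion-tab>
```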

When saving, we will see that the application now has only two tabs: Chats and Account.

While in the www/templates/tabs.html file we notice that there are some HTML tags that are not standard HTML, like ion-tabs, ion-tab, and ion-nav-view. These are actually AngularJS directives defined by the Ionic Framework. Directives are tags that pack functionality behind them, and they are a very convenient way to write more structured and more concise code.

In our case, the ion-tabs directive is the tabs component, which for each tab requires an ion-tab directive.

Let's change our tabs from Chat and Account to our required names Latest posts and Settings. To do this, we will modify several things in the www/templates/tabs.html file:

  • title attribute of the ion-tab elements which determines the text on the tab button. We will change that to Latest posts and Settings respectively.
  • href attribute of the ion-tab elements which points to the route or screen URL. We will change those to #/tab/latest-posts and #/tab/settings.
  • name attribute of the ion-nav-view elements to tab-latest-posts and tab-settings. These are the identifiers for the view templates used for the Latest posts and Settings screens.

As a result, www/templates/tabs.html should look like this:
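A sketch of the resulting template, applying the three changes listed above; the icon class names are assumptions, everything else follows the text:

```html
<ion-tabs class="tabs-icon-top tabs-color-active-positive">

  <ion-tab title="Latest posts" icon-off="ion-ios-list-outline" icon-on="ion-ios-list" href="#/tab/latest-posts">
    <ion-nav-view name="tab-latest-posts"></ion-nav-view>
  </ion-tab>

  <ion-tab title="Settings" icon-off="ion-ios-gear-outline" icon-on="ion-ios-gear" href="#/tab/settings">
    <ion-nav-view name="tab-settings"></ion-nav-view>
  </ion-tab>

</ion-tabs>
```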

After making these changes, we will get some errors. This is because we also have to adjust our routes to use the new identifiers we have used. In www/js/app.js, we need to change the state identifiers, the view identifiers and the url for each route according to what we have set above.

For each route (or screen), there is a controller defined. This is a basic MVC (Model-View-Controller) design pattern. Controllers are defined within the file www/js/controllers.js. For consistency purposes, we will change the names of the controllers in both www/js/app.js and www/js/controller.js:

  • ChatsCtrl becomes LatestPostsCtrl.
  • ChatDetailCtrl becomes PostDetailCtrl.
  • AccountCtrl becomes SettingsCtrl.

Also, for each route we have a view template defined, so let's change them too. Edit www/js/app.js and modify templateUrl like this:

  • Change tab-chats.html to tab-latest-posts.html. Also rename the file www/templates/tab-chats.html to www/templates/tab-latest-posts.html.
  • Change chat-detail.html to post-detail.html. Also rename the file www/templates/chat-detail.html to www/templates/post-detail.html.
  • Change tab-account.html to tab-settings.html. Also rename the file www/templates/tab-account.html to www/templates/tab-settings.html.
  • Finally, change the view that gets loaded by default to latest-posts by using $urlRouterProvider.otherwise('/tab/latest-posts').

If all went well then you should end up with the www/js/app.js file looking like this:
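The original listing was not preserved in this extract; below is a sketch of the route configuration that the steps above describe. The module and state names follow the stock starter's conventions and are assumptions, and the usual Cordova `.run()` boilerplate is omitted for brevity:

```javascript
angular.module('starter', ['ionic', 'starter.controllers', 'starter.services'])

.config(function ($stateProvider, $urlRouterProvider) {
  $stateProvider
    // Abstract parent state that hosts the tab bar.
    .state('tab', {
      url: '/tab',
      abstract: true,
      templateUrl: 'templates/tabs.html'
    })
    .state('tab.latest-posts', {
      url: '/latest-posts',
      views: {
        'tab-latest-posts': {
          templateUrl: 'templates/tab-latest-posts.html',
          controller: 'LatestPostsCtrl'
        }
      }
    })
    .state('tab.post-detail', {
      url: '/latest-posts/:postId',
      views: {
        'tab-latest-posts': {
          templateUrl: 'templates/post-detail.html',
          controller: 'PostDetailCtrl'
        }
      }
    })
    .state('tab.settings', {
      url: '/settings',
      views: {
        'tab-settings': {
          templateUrl: 'templates/tab-settings.html',
          controller: 'SettingsCtrl'
        }
      }
    });

  // Default route: the Latest posts screen.
  $urlRouterProvider.otherwise('/tab/latest-posts');
});
```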

And our cleaned up www/js/controllers.js file looks like this:
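A sketch of the cleaned-up controllers file at this point, with the three renamed controllers left empty until the following sections fill them in:

```javascript
angular.module('starter.controllers', [])

.controller('LatestPostsCtrl', function ($scope) {
  // Will load the RSS feed in a later section.
})

.controller('PostDetailCtrl', function ($scope, $stateParams) {
  // Will look up a single post by $stateParams.postId.
})

.controller('SettingsCtrl', function ($scope) {
  // Will manage the application settings.
});
```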

Now that we have restructured the app to fit our needs, let's move on to the next part and add some functionality.

How to Retrieve an RSS Feed With Ionic

In order to display the list of latest posts, our application will need to retrieve the RSS feed from a URL. As a best practice, it is advisable that this kind of functionality reside in the service layer of the application. In this way we can use it more easily in our controller and then present it to the user by using a view.

The RSS service will make use of Yahoo's YQL REST API to retrieve the RSS of our website. To call on the REST API, we will use the $http provider offered by AngularJS.

Ionic services are usually defined in the www/js/services.js file, so that's where we will put ours too. The code will look like this:
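The service listing did not survive extraction; the sketch below reconstructs the approach the text describes. The YQL endpoint and the shape of its JSON response are assumptions based on how YQL worked in 2017 (the service has since been retired by Yahoo), and the parser field names are illustrative:

```javascript
angular.module('starter.services', [])

.service('RSS', function ($http) {
  var self = this;

  // Download the feed through Yahoo's YQL REST API and normalize
  // ATOM or RSS entries into one common format passed to next().
  self.download = function (feedUrl, next) {
    var query = "select * from xml where url='" + feedUrl + "'";
    var url = 'https://query.yahooapis.com/v1/public/yql?q=' +
      encodeURIComponent(query) + '&format=json';

    $http.get(url).then(function (response) {
      var results = response.data.query.results;
      if (results.feed) {
        next(self.parseAtom(results.feed));   // ATOM feed
      } else if (results.rss) {
        next(self.parseRSS(results.rss));     // RSS feed
      }
    });
  };

  // ATOM: entries live under feed.entry.
  self.parseAtom = function (feed) {
    return feed.entry.map(function (entry, index) {
      return {
        id: index,
        title: entry.title.content || entry.title,
        description: entry.content.content || entry.content,
        link: entry.link[0] ? entry.link[0].href : entry.link.href
      };
    });
  };

  // RSS: items live under rss.channel.item.
  self.parseRSS = function (rss) {
    return rss.channel.item.map(function (item, index) {
      return {
        id: index,
        title: item.title,
        description: item.description,
        link: item.link
      };
    });
  };
});
```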

We declare the service using the service() method provided by AngularJS. We then inject Angular's $http module so we can call it in our service.

The self variable is a reference to the RSS service so that we can call it from within the service's methods. The main method of the service is the download() method, which downloads the feed information and processes it. There are two main formats used for website feeds: RSS and ATOM. For our application, we have used the feed of tutorials from Tuts+ https://tutorials.tutsplus.com/posts.atom which is in ATOM format, but for completeness we have taken into account the RSS format too.

The download() method calls on the YQL API and parses the results using the parseAtom() or the parseRSS() methods depending on the type of feed. The idea here is to have the same output format which will be passed further via the callback next(). With the RSS service in place, we can move on to the controller.

Hooking the RSS Service to the Latest Posts Controller

In our www/js/controllers.js file, we need to load the RSS data and pass it to our view. To do that, we only need to modify our LatestPostsCtrl controller like this:
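A minimal sketch of the modified controller, assuming the `RSS` service interface sketched earlier and the Tuts+ feed URL given in the text:

```javascript
.controller('LatestPostsCtrl', function ($scope, RSS) {
  // Download the feed and expose the parsed posts to the view.
  RSS.download('https://tutorials.tutsplus.com/posts.atom', function (posts) {
    $scope.posts = posts;
  });
})
```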

Using Angular's dependency injection mechanism, we only need to specify the $scope and RSS variables as method parameters, and it will know how to load those modules. The $scope module allows us to set variables on the model bound to the view. Any values set in the scope can be then retrieved and displayed inside the view associated with the controller.

When the view for latest posts is loaded, it will call on the LatestPostsCtrl controller, and this in turn will use the RSS service to download the feed information. The results are parsed and passed back as an array using the posts variable, which we store in the current scope.

With all that out of the way, we can now move on to the view part, displaying the list of posts retrieved from the feed.

Hooking the Latest Posts View to the Feed Data

We now need to modify our view for the latest posts. If you remember, this is configured in the www/js/app.js file via the templateUrl attribute, and it points to the www/templates/tab-latest-posts.html file.

What we will want to do is display the list of feeds. Since the feed information may contain HTML, and this will only clutter the list of latest posts, we need something to extract the text without the HTML tags from a post's content. The easiest way to do that is by defining an AngularJS filter that strips the HTML tags from text. Let's do that in www/js/services.js by adding:
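The filter listing is missing here; below is a minimal sketch. The regex-based stripping function is a common AngularJS recipe, and the filter name `htmlToPlaintext` matches the name used later in the template, while the `starter.services` module name is an assumption:

```javascript
// Core stripping logic: remove anything that looks like an HTML tag.
function htmlToPlaintext(text) {
  return text ? String(text).replace(/<[^>]+>/gm, '') : '';
}

// Registration on the services module (module name is an assumption):
// angular.module('starter.services').filter('htmlToPlaintext', function () {
//   return htmlToPlaintext;
// });
```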

Now, back to our view inside the www/templates/tab-latest-posts.html file, let's modify it to look like this:
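A sketch of the list view the next paragraph describes; the CSS class names and chevron icon are assumptions, while the `ng-repeat`, `href`, and `htmlToPlaintext` usage follow the text:

```html
<ion-view view-title="Latest posts">
  <ion-content>
    <ion-list>
      <ion-item ng-repeat="post in posts"
                class="item-icon-right"
                href="#/tab/latest-posts/{{post.id}}">
        <h2>{{post.title}}</h2>
        <p>{{post.description | htmlToPlaintext}}</p>
        <i class="icon ion-chevron-right icon-accessory"></i>
      </ion-item>
    </ion-list>
  </ion-content>
</ion-view>
```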

We are using the Ionic list UI component together with Angular's ng-repeat directive, which will iterate through the posts set on the scope of our controller. For each post entry, we will have a list item with its title and with the description stripped of HTML tags by the use of the htmlToPlaintext filter. Also note that clicking a post should take us to the detail of the post because of the href attribute set to #/tab/latest-posts/{{post.id}}. That does not work yet, but we will take care of that in the next section.

If we now run the application using ionic serve --lab, we should get something like this:

Viewing Latest Posts

Showing the Details of a Post

When clicking on a post in the list, we go to the post details screen of the application. Because each screen of the application has its own controller and therefore its own scope, we can't access the list of posts to display a specific post. We can call the RSS service again, but that would be inefficient.

To solve this problem, we can make use of the $rootScope service offered by Angular. This references a scope that overarches all controllers in the application. Let's modify our LatestPostsCtrl to set the posts in the $rootScope and then search for the specific post that the user clicked in the PostDetailCtrl. The resulting code in www/js/controllers.js will look like this:
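A sketch of the two controllers after this change, assuming the `RSS` service interface and feed URL used earlier:

```javascript
.controller('LatestPostsCtrl', function ($scope, $rootScope, RSS) {
  // Store posts on the root scope so other controllers can read them.
  RSS.download('https://tutorials.tutsplus.com/posts.atom', function (posts) {
    $rootScope.posts = posts;
  });
})

.controller('PostDetailCtrl', function ($scope, $rootScope, $stateParams) {
  // Find the post whose id matches the :postId URL parameter.
  angular.forEach($rootScope.posts, function (post) {
    if (post.id == $stateParams.postId) {
      $scope.post = post;
    }
  });
})
```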

We simply injected $rootScope in both controllers and used it for passing posts between the two controllers. Please note that we don't need to make any changes in our latest posts view as $rootScope and $scope are both accessible in the same way from the view.

Inside the PostDetailCtrl controller, we simply search for the post with the id passed in the link clicked by the user. We do that by comparing each post ID with the value in the URL passed via the $stateParams.postId variable. If we find a match then we set the post on the scope so we can use it in our view.

Let's now adjust our post detail view www/templates/post-detail.html like this:
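A sketch of the detail view the bullet list below describes. Note that `ng-bind-html` requires the ngSanitize module (or HTML explicitly trusted via `$sce`); the button styling classes are assumptions:

```html
<ion-view view-title="{{post.title}}">
  <ion-nav-buttons side="right">
    <!-- target="_system" opens the link in the device's external browser -->
    <a class="button button-clear" ng-href="{{post.link}}" target="_system">Open</a>
  </ion-nav-buttons>
  <ion-content class="padding">
    <div ng-bind-html="post.description"></div>
  </ion-content>
</ion-view>
```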

This is what we have done in the view:

  • We have placed the title of the post in the header of the screen.
  • We have placed an "Open" button in the header on the right. This button will open the post link in an external browser because of the attribute target="_system". We have to do this because the application is already running in a browser due to Cordova. If we didn't set that attribute, the post would have opened in the same browser as the application, and then we would not have a way to return to the application.
  • We display the description of the post as HTML by using Angular's ng-bind-html directive.

While running the application, I noticed that if the post description contains images, some of them fall off the screen. This might be the case with other HTML elements like videos. We can easily fix this by adding the following CSS rule in www/css/style.css.
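A minimal sketch of the CSS rule, constraining embedded media to the screen width:

```css
/* Keep images and videos inside post descriptions on screen. */
img, video {
  max-width: 100%;
  height: auto;
}
```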

If we now take a look at the application and click on one of the posts, we should see something like this:

Seeing posts for Tuts articles

And our application is almost complete. In the next section, we will take a look at implementing the settings screen.

Adding Settings for Our Ionic Application

For our settings screen, we will implement a way to indicate how many posts to display on the main screen of the application. We will store this setting in the localStorage memory, which is not erased when the application is closed. Let's edit the controllers file www/js/controllers.js and change the SettingsCtrl controller like this:
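A sketch of the controller the next two paragraphs describe. The `removeItem` branch is an assumption added so that clearing the limit does not persist the string "null" in localStorage:

```javascript
.controller('SettingsCtrl', function ($scope) {
  // Restore the saved limit; null means "no limit".
  $scope.settings = {
    maxPosts: window.localStorage['myWebsiteOnMobile.maxPosts'] || null
  };

  // Persist the setting whenever the radio selection changes.
  $scope.$watch('settings.maxPosts', function (newValue) {
    if (newValue) {
      window.localStorage['myWebsiteOnMobile.maxPosts'] = newValue;
    } else {
      window.localStorage.removeItem('myWebsiteOnMobile.maxPosts');
    }
  });
})
```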

Also, we need to modify the settings screen in www/templates/tab-settings.html like this:
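A sketch of the settings screen using Ionic's radio component bound to `settings.maxPosts`; the specific limit values offered (5, 10, all) are assumptions:

```html
<ion-view view-title="Settings">
  <ion-content>
    <div class="list">
      <div class="item item-divider">Maximum number of posts</div>
      <ion-radio ng-model="settings.maxPosts" ng-value="5">5 posts</ion-radio>
      <ion-radio ng-model="settings.maxPosts" ng-value="10">10 posts</ion-radio>
      <ion-radio ng-model="settings.maxPosts" ng-value="null">All posts</ion-radio>
    </div>
  </ion-content>
</ion-view>
```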

The controller retrieves the setting myWebsiteOnMobile.maxPosts from the localStorage. If it does not exist, it will be null, and we will consider that there is no limit for the maximum number of posts.

We call the $scope.$watch() method to monitor changes of the settings.maxPosts variable, which is bound to the radio control in the settings screen.

With all this in place, every time we change the maximum number of posts on the settings screen, the setting will be stored in the localStorage, and it will be retrieved from there when the application restarts.

Now let's make use of this setting. This is as simple as adding this in the LatestPostsCtrl from www/js/controllers.js:
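The addition is a single line exposing the saved limit on the scope; `parseInt` is an assumption here, since localStorage stores strings and limitTo expects a number:

```javascript
$scope.maxPosts = parseInt(window.localStorage['myWebsiteOnMobile.maxPosts']) || null;
```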

And applying a filter in the latest posts screen www/templates/tab-latest-posts.html:
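Only the `ng-repeat` expression changes, gaining the limitTo filter; the rest of the item markup stays as before:

```html
<ion-item ng-repeat="post in posts | limitTo:maxPosts"
          class="item-icon-right"
          href="#/tab/latest-posts/{{post.id}}">
  <!-- item content unchanged -->
</ion-item>
```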

Notice the limitTo:maxPosts Angular filter. This will limit the number of posts displayed to the number taken from the localStorage. By default, this will be null, which will display all the posts retrieved by the RSS service.

Congratulations! We now have a fully working application displaying an RSS feed.

Conclusion

In this tutorial, we have seen how to create a hybrid mobile application using the Ionic Framework and AngularJS. There is only one more thing to do: run the application on a mobile device or mobile emulator. This is very simple with Ionic. To run the application on an Android emulator, just run:
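A sketch of the command, assuming the Ionic v1 CLI; `ionic emulate android` explicitly targets an emulator, while `ionic run android` prefers a connected device:

```shell
ionic emulate android
```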

If you want to download a premade Ionic application template for transforming any website to a mobile application, try the Website to Mobile Ionic Application Template from CodeCanyon.

An application template on CodeCanyon
Published 2017-06-27 by John Negoita

Code a Real-Time NativeScript App: Geolocation and Google Maps


NativeScript is a framework for building cross-platform native mobile apps using XML, CSS, and JavaScript. In this series, we'll try out some of the cool things you can do with a NativeScript app: geolocation and Google Maps integration, SQLite database, Firebase integration, and push notifications. Along the way, we'll build a fitness app with real-time capabilities that will use each of these features.

In this tutorial, you'll learn how to work with geolocation and Google Maps in NativeScript apps. 

I'm assuming that you already know how to create apps in NativeScript. If you're new to NativeScript, I recommend that you first check out one of the earlier NativeScript tutorials before trying to follow this tutorial.

What You'll Be Creating

You'll be creating a walking tracker using geolocation and Google Maps. It will show the user how much distance they've covered and the number of steps they've taken to cover that distance. There will also be a map that will show the user's current location.

To give you an idea, here's what the final output will look like:

app final

Setting Up the Project

Start by creating a new NativeScript app:
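The command listing is missing here; a sketch using a hypothetical app name (`fitnessApp` is not from the original article):

```shell
tns create fitnessApp
cd fitnessApp
```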

To make it easier to set up the UI of the app, I've created a GitHub repo which includes both the starter and final version of the project. You can go ahead and copy the contents of the app folder to your project's app folder. We will only be working with two files: main-page.xml and main-page.js. The rest is just boilerplate from the NativeScript demo project. 

Running the App

We will be using the Android emulator provided by Android Studio to test the app. This will allow us to use the Android GPS Emulator to simulate the changing of locations from the comfort of our own homes. I don't really like aimlessly walking around outside to test geolocation either! But if that's your thing then I won't stop you.

If you execute tns run android, it will automatically call the Android emulator if it's already installed. If it's not yet installed, you can install it by launching Android Studio, clicking configure, and selecting SDK Manager. This will open the SDK Platforms by default. Click on the SDK Tools tab and make sure to select Android Emulator, and click on Apply to install it.

To use the GPS emulator, download it from GitHub and run the executable war file:

Once that's done, you should be able to access http://localhost:8080/gpsemulator/ from your browser and connect to localhost. Make sure that the Android emulator is already running when you do this. Once you're connected, simply zoom in the map and click on any place you want to use as the location. The app will detect this and use it as its current location.

GPS Emulator

Working With Geolocation

Geolocation in NativeScript is similar to the Geolocation API in JavaScript. The only difference in functionality is the addition of a distance() function which is used for calculating the distance between two locations.

Installing the Geolocation Plugin

In order to work with geolocation, you first need to install the geolocation plugin:
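The plugin is installed through the NativeScript CLI:

```shell
tns plugin add nativescript-geolocation
```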

Once that's done, you can now include it from your script files:
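The require statement for the plugin:

```javascript
var geolocation = require("nativescript-geolocation");
```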

Getting the User's Current Location

The NativeScript geolocation plugin includes three functions which you can use for working with the user's current location. We will be using each of these in this app:

  • getCurrentLocation
  • watchLocation
  • distance

Open the main-view-model.js file and add the following code inside the createViewModel() function. Here we're initializing the variables that we will be using later on for storing the different values that are needed for keeping track of the user's location. 

I've added some comments in the code so you know what's going on. There are also some lines of code that are commented out; these are for the Google Maps integration. I've commented them out for now to keep things simple. Once we get to the Google Maps integration, you'll need to remove those comments.
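The listing itself is missing from this extract; a sketch of the kind of state the text describes, with variable names chosen to match those mentioned later in the article (they are still assumptions):

```javascript
// Inside createViewModel(): state used for tracking the walk.
viewModel.locations = [];        // every location fix received so far
viewModel.start_location = null; // where tracking began
viewModel.total_distance = 0;    // meters covered since tracking started
viewModel.total_steps = 0;       // approximate steps taken
viewModel.is_tracking = false;   // toggled by the start/stop button

// Google Maps integration, commented out for now:
// var mapView;
```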

Next, add the code for getting the user's current location. This code is executed when the user taps on the button for starting and stopping the location tracking. The geolocation.getCurrentLocation() method is used to get the current location. 

Here we've specified three options: desiredAccuracy, updateDistance, and timeout. desiredAccuracy allows you to specify the accuracy in meters. It has two possible values: Accuracy.high, which is about 3 meters, and Accuracy.any, which is about 300 meters. updateDistance specifies how much difference (in meters) there must be between the previous location and the current location before it will update. Lastly, timeout specifies how many milliseconds to wait for a location. 

Once a location is received, we set it as the start_location and push it on the locations array. Later on, this location will be used along with the first location that will be fetched from watching the user's current location to determine the distance traveled.

Watching for the User's Current Location

To get the current location, we use the geolocation.watchLocation() function. This function is similar to the setInterval() function in JavaScript, because it also executes the callback function repeatedly until you stop it with the geolocation.clearWatch() function. The callback function is automatically called based on the updateDistance and minimumUpdateTime

In the code below, the location will be updated if it is at least 5 meters different from the previous location that was fetched. But this update will only happen every 5 seconds. This means that if the user hasn't walked 5 meters or more within 5 seconds, the location won't update. 

Once the user indicates that they want to stop tracking, you need to call the geolocation.clearWatch() function. You also need to reset the rest of the values that are being updated every time the location is changed. 

Getting the Distance Between Two Locations

Now we're ready to get the distance. This can be done by calling the geolocation.distance() function. This function accepts two location objects as its arguments, so we'll use the last two locations that were pushed to the locations array to determine the distance (in meters) traveled by the user from a previously recorded location to the current one. From there, we can use an approximate conversion from meters to the number of steps—I say approximate because not all people will travel the same distance in a single step. 

After that, we can just add the resulting distance and steps to the total_distance and total_steps so we can keep track of the total distance and steps they have taken since they started tracking their location.

At this point, you can now start testing the app using the GPS emulator that I mentioned earlier. Do note that you need to hit save on the main-view-model.js file to trigger an app reload. 

Then pick a location in the GPS emulator so that a fresh location will be fetched by the app once it loads. If you don't do this, it will default to the Googleplex location in Mountain View, California. This means that the next time you pick a location on the emulator, it will jump from this location to the location that you picked. If it's far away then you'll get a really large number for the distance and steps. 

Alternately, you could test on a real device with internet and GPS enabled. Only GPS is required at this point, but once we add Google Maps, the app will need an internet connection.

Working With Google Maps

We will now use Google Maps to add a map that shows the user's current location.

Installing the Google Maps Plugin

Once installed, you need to copy the template string resource files for Android:

Next, open the app/App_Resources/Android/values/nativescript_google_maps_api.xml file and add your own Google Maps API key (server key):

Make sure that you have enabled the Google Maps Android API from the Google Console before you try to use it.

Adding the Map

For the map, open the main-page.xml file and you should see the following:

Here we've specified three options (longitudelatitude, and zoom) and a function to execute once the map is ready. longitude and latitude specify the location you want to render in the map. The zoom specifies the zoom level of the map. mapReady is where we specify the function for adding the marker on the map. This marker represents the user's current location, so it will be rendered at the center of the map.

By default, this won't work as you haven't added the schema definition for the maps yet. So in your Page element, add the definition for the maps element:

Once that's done, a Google map instance should be rendered right below the button for tracking location. It won't have any maps yet since the latitude and longitude haven't been specified yet. To do that, go back to the main-view-model.js file and remove the comments for the lines of code for working with Google Maps:

Adding the Marker

Since we've already declared default coordinates for the marker, we can actually plot a marker once the map is ready:

Next, we need to update the marker position once the user starts tracking their location. You can do that inside the success callback function for the getCurrentLocation() function:

We also need update it when the user's location is updated (inside the success callback function for watchLocation):

Once that's done, a map which renders the default location should show in the app.

Conclusion

In this tutorial, you've created a NativeScript app that allows the user to track how much distance they have covered and the approximate number of steps they've taken to cover that distance. You've also used Google Maps to let the user view their current location. By doing so, you've learned how to use the geolocation and Google Maps plugins for NativeScript.

This is just the start! In the next posts of this series, we'll add a local database, push notifications and other cool features to our app.

In the meantime, check out some of our other posts on NativeScript and cross-platform mobile coding.

For a comprehensive introduction to NativeScript, try our video course Code a Mobile App With NativeScript. In this course, Keyvan Kasaei will show you step by step how to build a simple application. Along the way, you'll learn how to implement a simple app workflow with network requests, an MVVM architecture, and some of the most important NativeScript UI components. By the end, you'll understand why you should consider NativeScript for your next mobile app project.

 

2017-06-27T15:40:56.000Z2017-06-27T15:40:56.000ZWernher-Bel Ancheta

Code a Real-Time NativeScript App: Geolocation and Google Maps


NativeScript is a framework for building cross-platform native mobile apps using XML, CSS, and JavaScript. In this series, we'll try out some of the cool things you can do with a NativeScript app: geolocation and Google Maps integration, SQLite database, Firebase integration, and push notifications. Along the way, we'll build a fitness app with real-time capabilities that will use each of these features.

In this tutorial, you'll learn how to work with geolocation and Google Maps in NativeScript apps. 

I'm assuming that you already know how to create apps in NativeScript. If you're new to NativeScript, I recommend that you first check out one of the earlier tutorials on NativeScript before trying to follow this tutorial.

What You'll Be Creating

You'll be creating a walking tracker using geolocation and Google Maps. It will show the user how much distance they've covered and the number of steps they've taken to cover that distance. There will also be a map that will show the user's current location.

To give you an idea, here's what the final output will look like:

app final

Setting Up the Project

Start by creating a new NativeScript app:
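For example (the app name is arbitrary; use whatever you like):

```
tns create trackingApp
```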

To make it easier to set up the UI of the app, I've created a GitHub repo which includes both the starter and final versions of the project. You can go ahead and copy the contents of its app folder to your project's app folder. We will only be working with two files: main-page.xml and main-view-model.js. The rest is just boilerplate from the NativeScript demo project. 

Running the App

We will be using the Android emulator provided by Android Studio to test the app. This will allow us to use the Android GPS Emulator to simulate the changing of locations from the comfort of our own homes. I don't really like aimlessly walking around outside to test geolocation either! But if that's your thing then I won't stop you.

If you execute tns run android, it will automatically launch the Android emulator if it's already installed. If it's not yet installed, you can install it by launching Android Studio, clicking Configure, and selecting SDK Manager. This will open the SDK Platforms tab by default. Click on the SDK Tools tab, make sure Android Emulator is selected, and click Apply to install it.

To use the GPS emulator, download it from GitHub and run the executable war file:

Once that's done, you should be able to access http://localhost:8080/gpsemulator/ from your browser and connect to localhost. Make sure that the Android emulator is already running when you do this. Once you're connected, simply zoom in the map and click on any place you want to use as the location. The app will detect this and use it as its current location.

GPS Emulator

Working With Geolocation

Geolocation in NativeScript is similar to the Geolocation API in JavaScript. The only difference in functionality is the addition of a distance() function which is used for calculating the distance between two locations.

Installing the Geolocation Plugin

In order to work with geolocation, you first need to install the geolocation plugin:
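Assuming the tns CLI, the plugin installs from npm:

```
tns plugin add nativescript-geolocation
```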

Once that's done, you can now include it from your script files:
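For example, at the top of a script file:

```javascript
var geolocation = require("nativescript-geolocation");
```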

Getting the User's Current Location

The NativeScript geolocation plugin includes three functions which you can use for working with the user's current location. We will be using each of these in this app:

  • getCurrentLocation
  • watchLocation
  • distance

Open the main-view-model.js file and add the following code inside the createViewModel() function. Here we're initializing the variables that we will be using later on for storing the different values that are needed for keeping track of the user's location. 

I've added some comments in the code so you know what's going on. There are also some lines of code that are commented out; these are for the Google Maps integration. I've commented them out for now to keep things simple. Once we get to the Google Maps integration, you'll need to remove those comments.
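As a sketch, the initialization might look like this (the property names follow the snake_case used elsewhere in this tutorial; the exact defaults are illustrative):

```javascript
// Tracking state used throughout this tutorial.
viewModel.is_tracking = false;     // are we currently tracking?
viewModel.start_location = null;   // first fix of the session
viewModel.watch_id = null;         // id returned by watchLocation()
viewModel.locations = [];          // every fix recorded so far
viewModel.total_distance = 0;      // meters covered since tracking started
viewModel.total_steps = 0;         // approximate steps taken

// Google Maps bindings -- leave these commented out until the maps integration:
// viewModel.latitude = 0;
// viewModel.longitude = 0;
// viewModel.zoom = 17;
```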

Next, add the code for getting the user's current location. This code is executed when the user taps on the button for starting and stopping the location tracking. The geolocation.getCurrentLocation() method is used to get the current location. 

Here we've specified three options: desiredAccuracy, updateDistance, and timeout. desiredAccuracy allows you to specify the accuracy in meters. It has two possible values: Accuracy.high, which is about 3 meters, and Accuracy.any, which is about 300 meters. updateDistance specifies how much difference (in meters) there must be between the previous location and the current location before it will update. Lastly, timeout specifies how many milliseconds to wait for a location. 

Once a location is received, we set it as the start_location and push it on the locations array. Later on, this location will be used along with the first location that will be fetched from watching the user's current location to determine the distance traveled.
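A rough sketch of how this could look (the state object, the 0.1-meter updateDistance, and the 20-second timeout are illustrative; the plugin module and the Accuracy enum are passed in as parameters so the logic can be exercised without a device):

```javascript
// Fetch a single fix and record it as the starting location.
// `geolocation` and `Accuracy` are the plugin module and enum; `state`
// holds the tracking variables initialized in createViewModel().
function recordStartLocation(geolocation, Accuracy, state) {
    return geolocation.getCurrentLocation({
        desiredAccuracy: Accuracy.high, // roughly 3-meter accuracy
        updateDistance: 0.1,            // meters of movement before an update
        timeout: 20000                  // wait at most 20 seconds for a fix
    }).then(function (location) {
        state.start_location = location;
        state.locations.push(location);
        return location;
    });
}
```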

Watching for the User's Current Location

To keep receiving the user's location as it changes, we use the geolocation.watchLocation() function. This function is similar to the setInterval() function in JavaScript, because it executes the callback function repeatedly until you stop it with the geolocation.clearWatch() function. The callback function is automatically called based on the updateDistance and minimumUpdateTime options. 

In the code below, the location will be updated if it is at least 5 meters different from the previous location that was fetched. But this update will only happen every 5 seconds. This means that if the user hasn't walked 5 meters or more within 5 seconds, the location won't update. 
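A sketch of that call (the plugin module and the Accuracy enum are injected as parameters, so the shape is easy to exercise outside a device):

```javascript
// Watch the user's location: the success callback fires only when the user
// has moved at least 5 meters, and at most once every 5 seconds.
function startWatching(geolocation, Accuracy, onLocation) {
    return geolocation.watchLocation(
        onLocation,
        function (error) {
            console.log("watchLocation error: " + error.message);
        },
        {
            desiredAccuracy: Accuracy.high,
            updateDistance: 5,       // meters
            minimumUpdateTime: 5000  // milliseconds
        }
    );
}
```

The return value is a watch id, which you'll need later to stop watching.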

Once the user indicates that they want to stop tracking, you need to call the geolocation.clearWatch() function. You also need to reset the rest of the values that are being updated every time the location is changed. 
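For instance (a sketch; the state shape mirrors the variables initialized earlier, and the plugin is passed in as a parameter):

```javascript
// Stop watching the location and reset the per-session tracking state.
function stopTracking(geolocation, state) {
    if (state.watch_id !== null) {
        geolocation.clearWatch(state.watch_id);
        state.watch_id = null;
    }
    state.start_location = null;
    state.locations = [];
    state.total_distance = 0;
    state.total_steps = 0;
}
```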

Getting the Distance Between Two Locations

Now we're ready to get the distance. This can be done by calling the geolocation.distance() function. This function accepts two location objects as its arguments, so we'll use the last two locations that were pushed to the locations array to determine the distance (in meters) traveled by the user from a previously recorded location to the current one. From there, we can use an approximate conversion from meters to the number of steps—I say approximate because not all people will travel the same distance in a single step. 

After that, we can just add the resulting distance and steps to the total_distance and total_steps so we can keep track of the total distance and steps they have taken since they started tracking their location.
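A sketch of that bookkeeping (the 0.76-meter average step length is an assumption, not a fixed constant, and the plugin is injected so the arithmetic can be checked in isolation):

```javascript
// Convert the distance between the last two recorded locations into steps
// and add both to the running totals.
var METERS_PER_STEP = 0.76; // assumed average step length

function accumulateDistance(geolocation, state) {
    var count = state.locations.length;
    if (count < 2) {
        return;
    }
    var distance = geolocation.distance(
        state.locations[count - 2],
        state.locations[count - 1]
    );
    state.total_distance += distance;
    state.total_steps += Math.round(distance / METERS_PER_STEP);
}
```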

At this point, you can now start testing the app using the GPS emulator that I mentioned earlier. Do note that you need to hit save on the main-view-model.js file to trigger an app reload. 

Then pick a location in the GPS emulator so that a fresh location will be fetched by the app once it loads. If you don't do this, it will default to the Googleplex location in Mountain View, California. This means that the next time you pick a location on the emulator, it will jump from this location to the location that you picked. If it's far away then you'll get a really large number for the distance and steps. 

Alternatively, you could test on a real device with internet and GPS enabled. Only GPS is required at this point, but once we add Google Maps, the app will need an internet connection.

Working With Google Maps

We will now use Google Maps to add a map that shows the user's current location.

Installing the Google Maps Plugin
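The plugin can be installed from npm with the tns CLI (the package name below is the community Google Maps SDK plugin):

```
tns plugin add nativescript-google-maps-sdk
```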

Once installed, you need to copy the template string resource files for Android:
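With the plugin installed from npm, that copy step looks something like this (the exact path may differ between plugin versions):

```
cp -r node_modules/nativescript-google-maps-sdk/platforms/android/res/values app/App_Resources/Android/
```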

Next, open the app/App_Resources/Android/values/nativescript_google_maps_api.xml file and add your own Google Maps API key (server key):
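Once your key is in place, the file should look roughly like this (the string name comes from the plugin's template; the placeholder is yours to replace):

```xml
<resources>
    <string name="nativescript_google_maps_api_key">PUT_YOUR_API_KEY_HERE</string>
</resources>
```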

Make sure that you have enabled the Google Maps Android API from the Google Console before you try to use it.

Adding the Map

For the map, open the main-page.xml file and you should see the following:
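The relevant part is a mapView element along these lines (the attribute bindings assume the view-model properties used in this tutorial):

```xml
<maps:mapView latitude="{{ latitude }}" longitude="{{ longitude }}"
              zoom="{{ zoom }}" mapReady="onMapReady" />
```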

Here we've specified three options (latitude, longitude, and zoom) and a function to execute once the map is ready. latitude and longitude specify the location you want to render on the map. zoom specifies the zoom level of the map. mapReady is where we specify the function for adding the marker to the map. This marker represents the user's current location, so it will be rendered at the center of the map.

By default, this won't work as you haven't added the schema definition for the maps yet. So in your Page element, add the definition for the maps element:
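Your existing Page attributes stay as they are; only the xmlns:maps line is new:

```xml
<Page xmlns="http://schemas.nativescript.org/tns.xsd"
      xmlns:maps="nativescript-google-maps-sdk"
      navigatingTo="onNavigatingTo">
```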

Once that's done, a Google map instance should be rendered right below the button for tracking location. It won't render any map yet, since the latitude and longitude haven't been specified. To do that, go back to the main-view-model.js file and remove the comments from the lines of code for working with Google Maps:

Adding the Marker

Since we've already declared default coordinates for the marker, we can actually plot a marker once the map is ready:
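A sketch of the map-ready handler, assuming the nativescript-google-maps-sdk module (the onMapReady and viewModel names are illustrative):

```javascript
var mapsModule = require("nativescript-google-maps-sdk");

function onMapReady(args) {
    var mapView = args.object;
    var marker = new mapsModule.Marker();
    marker.position = mapsModule.Position.positionFromLatLng(
        viewModel.latitude, viewModel.longitude
    );
    mapView.addMarker(marker);
}
```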

Next, we need to update the marker position once the user starts tracking their location. You can do that inside the success callback function for the getCurrentLocation() function:
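Assuming a marker variable holding the map marker and the plugin required as mapsModule (both illustrative names), the update is a single reassignment:

```javascript
marker.position = mapsModule.Position.positionFromLatLng(
    location.latitude, location.longitude
);
```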

We also need to update it when the user's location changes (inside the success callback function for watchLocation):

Once that's done, a map which renders the default location should show in the app.

Conclusion

In this tutorial, you've created a NativeScript app that allows the user to track how much distance they have covered and the approximate number of steps they've taken to cover that distance. You've also used Google Maps to let the user view their current location. By doing so, you've learned how to use the geolocation and Google Maps plugins for NativeScript.

This is just the start! In the next posts of this series, we'll add a local database, push notifications and other cool features to our app.

In the meantime, check out some of our other posts on NativeScript and cross-platform mobile coding.

For a comprehensive introduction to NativeScript, try our video course Code a Mobile App With NativeScript. In this course, Keyvan Kasaei will show you step by step how to build a simple application. Along the way, you'll learn how to implement a simple app workflow with network requests, an MVVM architecture, and some of the most important NativeScript UI components. By the end, you'll understand why you should consider NativeScript for your next mobile app project.

 

2017-06-27T15:40:56.000Z2017-06-27T15:40:56.000ZWernher-Bel Ancheta

Adding Physics-Based Animations to Android Apps


Animations that feel fluid and realistic tend to make user interfaces more attractive. No wonder Material Design places so much emphasis on them! 

If you've ever tried creating such animations, however, you know that the simple animators and interpolators offered by the Android SDK are often not good enough. That's why recent revisions of the Android Support Library come with a physics module called Dynamic Animation.

With Dynamic Animation, you can create physics-based animations that closely resemble the movements of objects in the real world. You can also make them respond to user actions in real time. In this tutorial, I'll show you how to create a few such animations.

Prerequisites

To follow along, make sure you have the following:

1. Adding Dependencies

To be able to use Dynamic Animation in your project, you must add it as an implementation dependency in your app module's build.gradle file:
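At the time of writing, that dependency looks like this (match the version to your Support Library revision):

```groovy
dependencies {
    implementation "com.android.support:support-dynamic-animation:26.0.0"
}
```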

In this tutorial, we're going to be animating an ImageView widget. It will, of course, have to display some images, so open the Vector Asset Studio and add the following Material icons to your project:

  • sentiment neutral
  • sentiment very satisfied

Here's what they look like:

Two Material icons

For best results, I suggest you set the size of the icons to 56 x 56 dp.

2. Creating a Fling Animation

When you fling an object in the real world, you give it a large momentum. Because momentum is nothing but the product of mass and velocity, the object will initially have a high velocity. Gradually, however, thanks to friction, it slows down until it stops moving completely. Using Dynamic Animation's FlingAnimation class, you can simulate this behavior inside your app.

For the sake of demonstration, let us now create a layout containing a flingable ImageView widget, displaying the ic_sentiment_neutral_black_56dp icon, and a Button widget users can press to trigger the fling animation. If you place them both inside a RelativeLayout widget, your layout XML file will look like this:
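A sketch of such a layout (the id and the onClick handler name are illustrative):

```xml
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <ImageView
        android:id="@+id/image_view"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:src="@drawable/ic_sentiment_neutral_black_56dp" />

    <Button
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:text="Fling"
        android:onClick="onFlingButtonClicked" />

</RelativeLayout>
```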

In the above code, you can see that the Button widget has an onClick attribute. By clicking on the red light-bulb icon Android Studio shows beside it, you can generate an associated on-click event handler inside your Activity class:
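The generated handler is an ordinary public method taking a View; its name must match the onClick attribute in your layout (onFlingButtonClicked is an illustrative name):

```java
public void onFlingButtonClicked(View view) {
    // We'll start the fling animation from here.
}
```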

You can now create a new instance of the FlingAnimation class using its constructor, which expects a View object and the name of an animatable property. Dynamic Animation supports several animatable properties, such as scale, translation, rotation, and alpha.

The following code shows you how to create a FlingAnimation instance that can animate the X-coordinate of our layout's ImageView:
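A minimal sketch, assuming the classes from the android.support.animation package and an ImageView with the id image_view:

```java
ImageView imageView = (ImageView) findViewById(R.id.image_view);

// DynamicAnimation.X animates the view's X-coordinate.
FlingAnimation flingAnimation =
        new FlingAnimation(imageView, DynamicAnimation.X);
```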

By default, a FlingAnimation instance is configured to use 0 pixels/second as its initial velocity. That means the animation would stop as soon as it's started. To simulate a realistic fling, you must always remember to call the setStartVelocity() method and pass a large value to it.

Additionally, you must understand that without friction, the animation will not stop. Therefore, you must also call the setFriction() method and pass a small number to it.

The following code configures the FlingAnimation instance such that the ImageView is not flung out of the bounds of the user's screen:
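Putting the velocity, the friction, and the screen bounds together might look like this (the 1,000 px/s start velocity and 1.1 friction are illustrative values):

```java
flingAnimation.setStartVelocity(1000f); // pixels per second
flingAnimation.setFriction(1.1f);

// Keep the ImageView within the horizontal bounds of the screen.
DisplayMetrics metrics = getResources().getDisplayMetrics();
flingAnimation.setMinValue(0f);
flingAnimation.setMaxValue(metrics.widthPixels - imageView.getWidth());
```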

At this point, you can simply call the start() method to start the animation.

If you run the app now and press the button, you should be able to see the fling animation.

 

It is worth noting that you don't specify a duration or an end value when creating a physics-based animation—the animation stops automatically when it realizes that its target object is not showing any visible movement on the user's screen.

3. Simulating Springs

Dynamic Animation allows you to easily add spring dynamics to your animations. In other words, it can help you create animations that make your widgets bounce, stretch, and squash in ways that feel natural.

To keep things simple, let's now reuse our layout's ImageView and apply a spring-based animation to it. To allow the user to initiate the animation, however, you'll need to add another Button widget to the layout.

To create a spring-based animation, you must use the SpringAnimation class. Its constructor, too, expects a View object and an animatable property. The following code creates a SpringAnimation instance configured to animate the X-coordinate of the ImageView:
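A sketch, assuming the same imageView as before:

```java
SpringAnimation springAnimation =
        new SpringAnimation(imageView, DynamicAnimation.X);
```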

To control the behavior of a spring-based animation, you'll need a spring. You can create one using the SpringForce class, which allows you to specify the resting position of the spring, its damping ratio, and its stiffness. You can think of the damping ratio as a constant that, like friction, is responsible for slowing the animation down until it stops. The stiffness, on the other hand, specifies how much force is required to stretch the spring.

If all that sounds a bit too complicated, the good news is that the SpringForce class offers several intuitively named constants you can use to quickly configure your spring. For instance, the following code creates a spring that is both very bouncy and very flexible:
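For example, something along these lines, with the resting position set to the ImageView's initial X-coordinate:

```java
SpringForce springForce = new SpringForce(imageView.getX());
springForce.setDampingRatio(SpringForce.DAMPING_RATIO_HIGH_BOUNCY);
springForce.setStiffness(SpringForce.STIFFNESS_VERY_LOW);
```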

In the above code, you can see that we've set the value of the final resting position of the spring to the initial X-coordinate of the ImageView. With this configuration, you can imagine that the ImageView is attached to a tight, invisible rubber band, which quickly pulls the ImageView back to its original position every time it is moved.

You can now associate the spring with the SpringAnimation instance using the setSpring() method.

Lastly, before starting the animation, you must make sure you give it a large initial velocity using the setStartVelocity() method.
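A sketch of those two calls, with an illustrative 5,000 px/s start velocity:

```java
springAnimation.setSpring(springForce);
springAnimation.setStartVelocity(5000f); // pixels per second
springAnimation.start();
```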

If you run the app now, you should see something like this:

 

4. Listening to Animation Events

An animation that's created using the Dynamic Animation library must always be started from the UI thread. You can also be sure that it will start as soon as you call the start() method. However, it runs asynchronously. Therefore, if you want to be notified when it ends, you must attach an OnAnimationEndListener object to it using the addEndListener() method.

To see the listener in action, let's change the Material icon the ImageView displays every time the spring-based animation, which we created in the previous step, starts and ends. I suggest you use the ic_sentiment_very_satisfied_black_56dp icon when the animation starts, and the ic_sentiment_neutral_black_56dp icon when it ends. The following code shows you how:
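A sketch of the listener (the drawable names match the icons added earlier):

```java
imageView.setImageResource(R.drawable.ic_sentiment_very_satisfied_black_56dp);
springAnimation.addEndListener(new DynamicAnimation.OnAnimationEndListener() {
    @Override
    public void onAnimationEnd(DynamicAnimation animation, boolean canceled,
                               float value, float velocity) {
        imageView.setImageResource(R.drawable.ic_sentiment_neutral_black_56dp);
    }
});
springAnimation.start();
```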

With the above code, the animation will look like this:

 

5. Animating Multiple Properties

The constructors of both the FlingAnimation and the SpringAnimation classes can only accept one animatable property. If you want to animate multiple properties at the same time, you can either create multiple instances of the classes, which can get cumbersome, or create a new custom property that encapsulates all your desired properties.

To create a custom animatable property, you must create a subclass of the FloatPropertyCompat class, which has two abstract methods: setValue() and getValue(). As you might have guessed, you can update the values of all your desired animatable properties inside the setValue() method. Inside the getValue() method, however, you must return the current value of any one property only. Because of this limitation, you'll usually have to make sure that the values of the encapsulated properties are not completely independent of each other.

For example, the following code shows you how to create a custom property called scale, which can uniformly animate both the SCALE_X and SCALE_Y properties of a widget:
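A sketch of such a property (the "scale" string is just a debugging label):

```java
FloatPropertyCompat<View> scale = new FloatPropertyCompat<View>("scale") {
    @Override
    public float getValue(View view) {
        // Report one of the encapsulated values.
        return view.getScaleX();
    }

    @Override
    public void setValue(View view, float value) {
        // Update both scale properties uniformly.
        view.setScaleX(value);
        view.setScaleY(value);
    }
};
```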

Now that the custom property is ready, you can use it like any other animatable property. The following code shows you how to create a SpringAnimation object with it:
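For instance, the following creates a spring animation whose resting value is the widget's original scale of 1:

```java
SpringAnimation scaleAnimation = new SpringAnimation(imageView, scale, 1f);
```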

While creating an animation that uses a custom property, it is a good idea to also call the setMinimumVisibleChange() method and pass a meaningful value to it in order to make sure that the animation doesn't consume too many CPU cycles. For our animation, which scales a widget, you can use the following code:
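For scale animations, the library provides a ready-made constant:

```java
scaleAnimation.setMinimumVisibleChange(
        DynamicAnimation.MIN_VISIBLE_CHANGE_SCALE);
```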

Here's what the custom property animation looks like:

 

Conclusion

You now know the basics of working with Dynamic Animation. With the techniques you learned in this tutorial, you can create convincing physics-based animations in a matter of minutes, even if you have very little knowledge of Newtonian physics.

To learn more about Dynamic Animation, refer to the official documentation. In the meantime, check out some of our other recent posts on Android app development!

2017-06-28T16:10:29.000Z2017-06-28T16:10:29.000ZAshraff Hathibelagal

Adding Physics-Based Animations to Android Apps

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-29053

Animations that feel fluid and realistic tend to make user interfaces more attractive. No wonder Material Design places so much emphasis on them! 

If you've ever tried creating such animations, however, you know that the simple animators and interpolators offered by the Android SDK are often not good enough. That's why recent revisions of the Android Support Library come with a physics module called Dynamic Animation.

With Dynamic Animation, you can create physics-based animations that closely resemble the movements of objects in the real world. You can also make them respond to user actions in real time. In this tutorial, I'll show you how to create a few such animations.

Prerequisites

To follow along, make sure you have the following:

1. Adding Dependencies

To be able to use Dynamic Animation in your project, you must add it as an implementation dependency in your app module's build.gradle file:

In this tutorial, we're going to be animating an ImageView widget. It will, of course, have to display some images, so open the Vector Assets Studio and add the following Material icons to your project:

  • sentiment neutral
  • sentiment very satisfied

Here's what they look like:

Two Material icons

For best results, I suggest you set the size of the icons to 56 x 56 dp.

2. Creating a Fling Animation

When you fling an object in the real world, you give it a large momentum. Because momentum is nothing but the product of mass and velocity, the object will initially have a high velocity. Gradually, however, thanks to friction, it slows down until it stops moving completely. Using Dynamic Animation's FlingAnimation class, you can simulate this behavior inside your app.

For the sake of demonstration, let us now create a layout containing a flingable ImageView widget, displaying the ic_sentiment_neutral_black_56dp icon, and a Button widget users can press to trigger the fling animation. If you place them both inside a RelativeLayout widget, your layout XML file will look like this:

In the above code, you can see that the Button widget has an onClick attribute. By clicking on the red light-bulb icon Android Studio shows beside it, you can generate an associated on-click event handler inside your Activity class:

You can now create a new instance of the FlingAnimation class using its constructor, which expects a View object and the name of an animatable property. Dynamic Animation supports several animatable properties, such as scale, translation, rotation, and alpha.

The following code shows you how to create a FlingAnimation instance that can animate the X-coordinate of our layout's ImageView:

By default, a FlingAnimation instance is configured to use 0 pixels/second as its initial velocity. That means the animation would stop as soon as it's started. To simulate a realistic fling, you must always remember to call the setStartVelocity() method and pass a large value to it.

Additionally, you must understand that without friction, the animation will not stop. Therefore, you must also call the setFriction() method and pass a small number to it.

The following code configures the FlingAnimation instance such that the ImageView is not flung out of the bounds of the user's screen:

At this point, you can simply call the start() method to start the animation.

If you run the app now and press the button, you should be able to see the fling animation.

 

It is worth noting that you don't specify a duration or an end value when creating a physics-based animation—the animation stops automatically when it realizes that its target object is not showing any visible movement on the user's screen.

3. Simulating Springs

Dynamic Animation allows you to easily add spring dynamics to your animations. In other words, it can help you create animations that make your widgets bounce, stretch, and squash in ways that feel natural.

To keep things simple, let's now reuse our layout's ImageView and apply a spring-based animation to it. To allow the user to initiate the animation, however, you'll need to add another Button widget to the layout.

To create a spring-based animation, you must use the SpringAnimation class. Its constructor too expects a View object and an animatable property. The following code creates a SpringAnimation instance configured to animate the x-coordinate of the ImageView:

To control the behavior of a spring-based animation, you'll need a spring. You can create one using the SpringForce class, which allows you to specify the resting position of the spring, its damping ratio, and its stiffness. You can think of the damping ratio as a constant that, like friction, is responsible for slowing the animation down until it stops. The stiffness, on the other hand, specifies how much force is required to stretch the spring.

If all that sounds a bit too complicated, the good news is that the SpringForce class offers several intuitively named constants you can use to quickly configure your spring. For instance, the following code creates a spring that is both very bouncy and very flexible:

In the above code, you can see that we've set the value of the final resting position of the spring to the initial X-coordinate of the ImageView. With this configuration, you can imagine that the ImageView is attached to a tight, invisible rubber band, which quickly pulls the ImageView back to its original position every time it is moved.

You can now associate the spring with the SpringAnimation instance using the setSpring() method.

Lastly, before starting the animation, you must make sure you give it a large initial velocity using the setStartVelocity() method.

If you run the app now, you should see something like this:

 

4. Listening to Animation Events

An animation that's created using the Dynamic Animation library must always be started from the UI thread. You can also be sure that it will start as soon as you call the start() method. However, it runs asynchronously. Therefore, if you want to be notified when it ends, you must attach an OnAnimationEndListener object to it using the addEndListener() method.

To see the listener in action, let's change the Material icon the ImageView displays every time the spring-based animation, which we created in the previous step, starts and ends. I suggest you use the ic_sentiment_very_satisfied_black_56dp icon when the animation starts, and the ic_sentiment_neutral_black_56dp icon when it ends. The following code shows you how:

With the above code, the animation will look like this:

 

5. Animating Multiple Properties

The constructors of both the FlingAnimation and the SpringAnimation classes can only accept one animatable property. If you want to animate multiple properties at the same time, you can either create multiple instances of the classes, which can get cumbersome, or create a new custom property that encapsulates all your desired properties.

To create a custom animatable property, you must create a subclass of the FloatPropertyCompat class, which has two abstract methods: setValue() and getValue(). As you might have guessed, you can update the values of all your desired animatable properties inside the setValue() method. Inside the getValue() method, however, you must return the current value of any one property only. Because of this limitation, you'll usually have to make sure that the values of the encapsulated properties are not completely independent of each other.

For example, the following code shows you how to create a custom property called scale, which can uniformly animate both the SCALE_X and SCALE_Y properties of a widget:
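One possible implementation looks like this:

```java
FloatPropertyCompat<View> scale = new FloatPropertyCompat<View>("scale") {
    @Override
    public float getValue(View view) {
        // SCALE_X and SCALE_Y are always kept equal here,
        // so returning either one is representative.
        return view.getScaleX();
    }

    @Override
    public void setValue(View view, float value) {
        // Update both encapsulated properties uniformly.
        view.setScaleX(value);
        view.setScaleY(value);
    }
};
```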

Now that the custom property is ready, you can use it like any other animatable property. The following code shows you how to create a SpringAnimation object with it:
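For example, assuming the custom scale property defined in the previous step and a final resting scale of 1:

```java
// The third argument is the spring's final position: a scale of 1,
// i.e. the widget's original size.
SpringAnimation scaleAnimation =
        new SpringAnimation(imageView, scale, 1f);
```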

While creating an animation that uses a custom property, it is a good idea to also call the setMinimumVisibleChange() method and pass a meaningful value to it in order to make sure that the animation doesn't consume too many CPU cycles. For our animation, which scales a widget, you can use the following code:
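The DynamicAnimation class offers a ready-made constant for scale animations (scaleAnimation here stands for the SpringAnimation instance driving the custom property):

```java
// Scale values change by tiny fractions, so tell the framework the
// smallest change that is actually visible to the user.
scaleAnimation.setMinimumVisibleChange(
        DynamicAnimation.MIN_VISIBLE_CHANGE_SCALE);
```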

Here's what the custom property animation looks like:

 

Conclusion

You now know the basics of working with Dynamic Animation. With the techniques you learned in this tutorial, you can create convincing physics-based animations in a matter of minutes, even if you have very little knowledge of Newtonian physics.

To learn more about Dynamic Animation, refer to the official documentation. In the meantime, check out some of our other recent posts on Android app development!

2017-06-28T16:10:29.000Z2017-06-28T16:10:29.000ZAshraff Hathibelagal

Faster Logins With Password AutoFill in iOS 11


Final product image
What You'll Be Creating

Password AutoFill in iOS 11

Logging in is the first step that a user has to take when they start with an app that requires an account. This usually takes several seconds if the user remembers their credentials and is able to type them right away. Other users may have to switch to their preferred password manager (iCloud Keychain, 1Password, LastPass, etc.) to copy their username and password. Needless to say, this interaction slows users down, and some of them will simply drop out of the process.

There have been some attempts to improve this experience. 1Password, for example, offers a nice extension that app developers can take advantage of. Another solution, included in iOS since WWDC 2014, is Safari Shared Credentials.

In iOS 11, though, Apple has introduced an even more seamless way to streamline the login process: the new Password AutoFill API. Compared to the previous solutions, it is easier for users to use, and faster for developers to implement.

In this post you'll learn how to speed up the login process and improve user retention with Password AutoFill, a new API introduced in iOS 11.

Introduction

Password AutoFill allows users to fill in their login credentials directly into your app by interacting with the QuickType bar which is shown above the keyboard. Improving the login flow will increase your user retention as well as your app's reputation. After this tutorial, you will be able to shorten the login flow duration to just a few seconds. 

There are two steps to implement Password AutoFill in your app:

  • Show the QuickType bar with the key icon and let users manually choose the correct login.
  • Optionally link together your app and website in a secure way, so that the QuickType bar can suggest the correct login to the user to speed up the process even further.

The QuickType Bar

The first step is to make the QuickType bar appear with the key button. After this step, users will be able to tap it and manually select the correct login from the presented view controller. All that's required to make the QuickType bar appear is to set the textContentType property on your UITextField or UITextView object. If you have a custom control that conforms to <UITextInput>, the same code will apply.

You should add this property to your email/username and password fields. One common implementation would be the following:
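A sketch of that setup (the outlet names usernameTextField and passwordTextField are assumptions):

```swift
// Somewhere during view setup, e.g. in viewDidLoad():
usernameTextField.textContentType = .username
passwordTextField.textContentType = .password
```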

iOS will show the QuickType bar on all devices running iOS 11 when at least one password is saved in the keychain. If you're testing on the Simulator and you don't see the QuickType bar appearing, it's most likely because your keychain is empty.

The QuickType bar with the simple key icon

After the user presses the key icon and authenticates via Touch ID, a list of all saved passwords is presented. The user can search or scroll through, and when the right credentials are found, with a single tap the login fields will be filled in.

As you can see, the slowest part in this process is finding the correct login in the keychain. In the next section we'll see how we can remove this step and improve the experience even more.

Credentials Suggestions

You can also tell iOS the website with which your app is associated. If the keychain contains credentials saved from Safari on iOS or macOS, those credentials will be suggested—eliminating the hassle of manually searching them in the keychain.

If you're using Universal Links already, your app should show the credentials for your website in the QuickType bar. iOS knows which website is associated with your app, so it is 100% ready to suggest credentials.

Another way to strongly link your app and website together, without needing Universal Links, is the Associated Domains capability with the web credentials (webcredentials) service.

Switch to your Xcode project settings, go to the Capabilities tab, and turn on Associated Domains. Add your website URL here. Let's say your website domain name is amazingwebsite.com: the listed domain name should be webcredentials:amazingwebsite.com.

Xcode Capabilities section with Associated Domains turned on

That's it for the configuration in the Xcode project. iOS now knows your app's associated website. The last step is to upload a file to your server, so that iOS can verify that you own the website that you are trying to associate with the app. (This is to prevent malicious apps from stealing credentials from other web sites.)

Create a new text file (outside of your Xcode project if you prefer) named apple-app-site-association. This is a standard name that iOS looks for on your server using a secure connection (you must have SSL set up on your server). The content of the file is also pretty standard. Just copy and paste the following code.
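The file's contents follow a standard shape; a minimal sketch (the identifier string is a placeholder you'll replace in the next step):

```json
{
    "webcredentials": {
        "apps": [ "TEAMID.com.example.AmazingApp" ]
    }
}
```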

You should change the string in the apps array to be your Team ID (which can be found in the developer portal under the membership section), followed by a period and the app's bundle identifier. Create a folder named .well-known in the root directory of your server and upload the file to it.

To make sure everything went as expected, check in a web browser if the file exists at the specified address. This is my address for example: https://patrickbalestra.com/.well-known/apple-app-site-association

If you see the JSON file correctly, like in the following image, you're all set.

Site association JSON file content

Launch the app and notice that the QuickType bar suggests your website credentials so that you can log in with a single tap.

Credentials suggestion in the QuickType bar

If you want to learn more about Password AutoFill, check out Session 206 at WWDC 2017.

Conclusion

As we have just seen, implementing Password AutoFill is very easy. You should consider taking a few minutes to implement it for the convenience of your users and business. It will speed up the login process and improve your app's retention.

Stay tuned for new tutorials covering the new iOS 11 APIs, and in the meantime, check out some of our other posts on iOS app development.

2017-06-30T14:17:13.000Z2017-06-30T14:17:13.000ZPatrick Balestra

Code a Real-Time NativeScript App: SQLite


NativeScript is a framework for building cross-platform native mobile apps using XML, CSS, and JavaScript. In this series, we're trying out some of the cool things you can do with a NativeScript app: geolocation and Google Maps integration, SQLite database, Firebase integration, and push notifications. Along the way, we're building a fitness app with real-time capabilities that will use each of these features.

In this tutorial, you'll learn how to integrate a SQLite database into the app to store data locally. Specifically, we'll be storing the walking sessions data that we gathered in the previous tutorial.

What You'll Be Creating

Picking up from the previous tutorial, you'll be adding a tab view for displaying the different portions of the app. Previously our app just had the Tracking page, so we didn't need tabs. In this post, we'll be adding the Walks page. This page will display the user's walking sessions. A new data point will be added here every time the user tracks their walking session. There will also be a function for clearing the data.

Here's what the final output will look like:

SQLite Final Output

Setting Up the Project

If you have followed the previous tutorial on geolocation, you can simply use the same project and build the features that we will be adding in this tutorial. Otherwise, you can create a new project and copy the starter files into your project's app folder.

After that, you also need to install the geolocation and Google Maps plugins:
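Assuming the same plugins as the previous tutorial, the installation looks like this (package names taken from the NativeScript plugin registry):

```bash
tns plugin add nativescript-geolocation
tns plugin add nativescript-google-maps-sdk
```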

Once installed, you need to configure the Google Maps plugin. You can read the complete instructions on how to do this by reading the section on Installing the Google Maps Plugin in the previous tutorial.

Once all of those are done, you should be ready to follow along with this tutorial.

Running the Project

You can run the project by executing tns run android. But since this app will build on the geolocation functionality, I recommend you use a GPS emulator for quickly setting and changing your location. You can read about how to do so in the section on Running the App in the previous tutorial.

Installing the SQLite Plugin

The first thing that you need to do to start working with SQLite is to install the plugin:
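A commonly used plugin for this is nativescript-sqlite:

```bash
tns plugin add nativescript-sqlite
```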

This allows you to do things like connecting to a database and doing CRUD (create, read, update, delete) operations on it.

Connecting to the Database

Open the main-page.js file and import the SQLite plugin:
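In the CommonJS style used elsewhere in this series, that import looks like:

```javascript
var sqlite = require('nativescript-sqlite');
```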

You can now connect to the database:
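A sketch of the connection and table setup using the plugin's promise-based API (the column names are inferred from the fields described below, and the way the db instance is handed to the page context is illustrative):

```javascript
// main-page.js
new sqlite('walks.db').then(function (db) {
    db.execSQL(
        'CREATE TABLE IF NOT EXISTS walks ' +
        '(id INTEGER PRIMARY KEY AUTOINCREMENT, total_distance INTEGER, ' +
        'total_steps INTEGER, start_datetime TEXT, end_datetime TEXT)'
    ).then(function () {
        // Pass the database instance into the page context so the
        // view-model can use it later on.
        page.bindingContext.db = db;
    }, function (err) {
        console.log('CREATE TABLE failed: ' + err);
    });
}, function (err) {
    console.log('Failed to open database: ' + err);
});
```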

The walks.db file was created from the terminal using the touch command, so it's just an empty file. Copy it into the app folder.

If it successfully connected, the promise's resolve function will be executed. Inside that, we run the SQL statement for creating the walks table. To keep things simple, all we need to save is the total distance covered (in meters) and the total steps, as well as the start and end timestamps. 

Once the query executes successfully, we pass the database instance (db) into the page context. This will allow us to use it from the main-view-model.js file later on.

Fetching Data

Now we're ready to work with the data. But since we'll be working with dates, we first need to install a library called fecha. This allows us to easily parse and format dates:
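Since fecha is distributed through npm, installation is the usual:

```bash
npm install fecha --save
```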

Once it's installed, open the main-view-model.js file and include the library:
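```javascript
var fecha = require('fecha');
```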

Next is the code for checking if geolocation is enabled. First, create a variable (walk_id) for storing the ID of a walking record. We need this because the app will immediately insert a new walk record into the walks table when the user starts location tracking. walk_id will store the ID that's auto-generated by SQLite so that we'll be able to update the record once the user stops tracking.

Next, get the current month and year. We'll use it to query the table so it only returns records that are in the same month and year. This allows us to limit the number of records that appear in the UI.

We also need a variable for storing the start timestamp. We'll use it later on to update the UI. This is because we're only querying the table once when the app is loaded, so we need to manually update the UI of any new data which becomes available. And since the starting timestamp will only have a value when the user starts tracking, we need to initialize it outside the scope so we can update or access its value later on.

Initialize the walks data that will be displayed in the UI:

Get the data from the walks table using the all() method. Here, we're supplying the month and the year as query parameters. The strftime() function is used to extract the month and year parts of the start_datetime column.
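A sketch of that query (assuming month and year are zero-padded strings, matching what strftime() returns):

```javascript
db.all(
    "SELECT * FROM walks " +
    "WHERE strftime('%m', start_datetime) = ? " +
    "AND strftime('%Y', start_datetime) = ? " +
    "ORDER BY start_datetime DESC",
    [month, year]
).then(function (rows) {
    // rows is an array of result rows; each row's values are accessed
    // by index, following the table's column order.
}, function (err) {
    console.log('SELECT failed: ' + err);
});
```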

Once a success response is returned, we loop through the result set so that we can format the data correctly. Note that the indexes in which we access the individual values depend on the table structure that was described earlier in the main-page.js file. The first column is ID, the second is the total distance, and so on.

The formatted data is then pushed to the walks array and is used to update the UI. has_walks is used as a toggle for the UI so that we can show or hide things based on its value.

This will supply the data for the ListView in the main-page.xml file:
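A sketch of that markup (the bound field names are assumptions based on the data described above):

```xml
<ListView items="{{ walks }}">
    <ListView.itemTemplate>
        <StackLayout orientation="vertical">
            <Label text="{{ start_time }}" />
            <Label text="{{ distance }}" />
            <Label text="{{ steps }}" />
        </StackLayout>
    </ListView.itemTemplate>
</ListView>
```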

Saving Data

Once the user starts tracking, set the current datetime as the start_datetime and insert initial values into the table using the execSQL() function. Just like the all() function, this expects the SQL query as the first argument and an array of parameters as the second.
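A sketch of that step (the fecha format mask is illustrative):

```javascript
start_datetime = fecha.format(new Date(), 'YYYY-MM-DD HH:mm:ss');

db.execSQL(
    'INSERT INTO walks (total_distance, total_steps, start_datetime) ' +
    'VALUES (?, ?, ?)',
    [0, 0, start_datetime]
).then(function (id) {
    walk_id = id; // the auto-generated ID of the inserted record
}, function (err) {
    console.log('INSERT failed: ' + err);
});
```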

If the query is successful, it should return the auto-generated ID for the inserted record. We then assign it as the value for the walk_id so it can be used later on to update this specific record.

Once the user stops tracking, we again get the current timestamp and format it accordingly for storage:

Since we've ordered the results from most to least recent, we use unshift() (instead of push()) to add the new item to the top of the walks array.

After that, we once again use the execSQL() function to update the record that we inserted earlier:
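A sketch of the update (the column names mirror the table structure described earlier; the distance and step variables come from the tracking code):

```javascript
db.execSQL(
    'UPDATE walks SET total_distance = ?, total_steps = ?, ' +
    'end_datetime = ? WHERE id = ?',
    [total_distance, total_steps, end_datetime, walk_id]
).then(function () {
    // Reset the tracking UI here, now that the update has succeeded.
}, function (err) {
    console.log('UPDATE failed: ' + err);
});
```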

Be sure to move the code for resetting the tracking UI (to reset the total distance and steps) inside the promise's resolve function so you can easily test whether the update query executed successfully or not. 

Clearing Data

Deleting data is done by clicking on the Clear Data button below the list of walk data:

In the main-view-model.js file, add the code for deleting all the data from the walks table. If you're used to MySQL, you might be wondering why we're using a DELETE query instead of TRUNCATE for emptying the table. That's because SQLite doesn't support the TRUNCATE statement, so we have to use DELETE without supplying a condition, which deletes all the records that are currently in the table.
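A sketch of that handler (the viewModel object and the way the bound array is emptied are illustrative):

```javascript
db.execSQL('DELETE FROM walks').then(function () {
    // Empty the bound array and hide the list.
    while (walks.length) {
        walks.pop();
    }
    viewModel.set('has_walks', false);
}, function (err) {
    console.log('DELETE failed: ' + err);
});
```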

Conclusion

In this tutorial, you've learned how to locally store data in your NativeScript apps using the SQLite plugin. As you have seen, SQLite allows you to reuse your existing SQL skills in managing a local database. It's important to note that not all functions that you're used to in MySQL are supported in SQLite. So it's always wise to consult the documentation if you're not sure whether a certain function is supported or not. 

If you want to learn about other options for storing data in NativeScript apps, I recommend you read this article on Going Offline With NativeScript.

In the final post of this series, we'll add push notifications to our app.

In the meantime, check out some of our other posts on NativeScript and cross-platform mobile coding.

For a comprehensive introduction to NativeScript, try our video course Code a Mobile App With NativeScript. In this course, Keyvan Kasaei will show you step by step how to build a simple application. Along the way, you'll learn how to implement a simple app workflow with network requests, an MVVM architecture, and some of the most important NativeScript UI components. By the end, you'll understand why you should consider NativeScript for your next mobile app project.

 
2017-07-03T12:00:00.000Z2017-07-03T12:00:00.000ZWernher-Bel Ancheta

Android Things: Creating a Cloud-Connected Doorman


Android Things allows you to make amazing IoT devices with simple code. In this post, I'll show you how to put the pieces together to build a more complex project!

This won't be a complete top-to-bottom tutorial. I'll leave you lots of room to expand and customize your device and app—so you can explore and learn further on your own. My goal is to have fun while working with this new development platform, and show you that there's more to Android Things than just blinking LEDs.

What Are We Building?

Half the fun of an Internet of Things project is coming up with the "thing". For this article, I'll build a cloud-connected doorbell, which will take a picture when someone approaches, upload that image to Firebase, and trigger an action. Our project will require a few components before we can start:

  • Raspberry Pi 3B with Android Things flashed to an SD card
  • Raspberry Pi camera
  • Motion detector (component: HCSR501)

In addition, you can customize your project to fit your own creative style and have some fun with it. For my project, I took a skeleton decoration that had been sitting on my porch since Halloween and used that as a casing for my project—with the eyes drilled out to hold the camera and motion detector. 

Skeleton Android Things device

I also added a servomotor to move the jaw, which is held closed with a piece of elastic, and a USB speaker to support text-to-speech capabilities. 

Schematics for a simple customized smart doorbell

You can start this project by building your circuit. Be sure to note what pin you use for your motion detector and how you connect any additional peripherals—for example, the connection of the camera module to the camera slot on your Raspberry Pi. With some customization, everyone's end product will be a little different, and you can share your own finished IoT project in the comments section for this article. For information on hooking up a circuit, see my tutorial on creating your first project.

Detecting Motion

There are two major components that we will use for this project: the camera and the motion detector. We'll start by looking at the motion detector. This will require a new class that handles reading digital signals from our GPIO pin. When motion is detected, a callback will be triggered that we can listen for on our MainActivity. For more information on GPIO, see my article on Android Things peripherals.

If you have been following along with the Android Things series on Envato Tuts+, you may want to try writing the complete motion detector class on your own, as it is a simple digital input component. If you'd rather skip ahead, you can find the entire component written in the project for this tutorial.

In your Activity you can instantiate your HCSR501 component and associate a new HCSR501.OnMotionDetectedEventListener with it.
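As a sketch, the wiring-up might look like the following. The HCSR501 driver class and its listener interface come from this tutorial's sample project, and the GPIO pin name "BCM21" is purely illustrative — substitute the pin you actually wired:

```java
// In MainActivity: open the motion sensor driver from the sample project.
private HCSR501 mMotionDetector;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    try {
        // "BCM21" is an illustrative pin name; use the pin from your circuit.
        mMotionDetector = new HCSR501("BCM21");
        mMotionDetector.setOnMotionDetectedEventListener(
                new HCSR501.OnMotionDetectedEventListener() {
                    @Override
                    public void onMotionDetectedEvent(HCSR501.State state) {
                        if (state == HCSR501.State.STATE_HIGH) {
                            // Motion detected: trigger the camera (next section).
                        }
                    }
                });
    } catch (IOException e) {
        // The sensor could not be opened; nothing will be detected.
    }
}
```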

Once your motion detector is working, it's time to take a picture with the Raspberry Pi camera.

Taking a Picture

One of the best ways to learn a new tool or platform quickly is to go through the sample code provided by the creators. In this case, we will use a class created by Google for taking a picture using the Camera2 API. If you want to learn more about the Camera2 API, you can check out our complete video course here at Envato Tuts+.

You can find all of the source code for the camera class in this project's sample, though the main method that you will be interested in is takePicture(). This method will take an image and return it to a callback in your application. 

Once this class has been added to your project, you will need to add the ImageReader.OnImageAvailableListener interface to your Activity, initialize the camera from onCreate(), and listen for any returned results. When your results are returned in onImageAvailable(), you will need to convert them to a byte array for uploading to Firebase.
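When the JPEG arrives in onImageAvailable(), the single image plane can be drained into a byte array before the Image is closed. A minimal sketch (the onPictureTaken() helper is assumed to be your own upload method, covered in the next section):

```java
// In MainActivity, which implements ImageReader.OnImageAvailableListener.
@Override
public void onImageAvailable(ImageReader reader) {
    Image image = reader.acquireLatestImage();
    try {
        // JPEG data lives in a single plane; copy it out before closing.
        ByteBuffer buffer = image.getPlanes()[0].getBuffer();
        byte[] imageBytes = new byte[buffer.remaining()];
        buffer.get(imageBytes);
        onPictureTaken(imageBytes);
    } finally {
        image.close();
    }
}
```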

Uploading a Picture

Now that you have your image data, it's time to upload it to Firebase. While I won't go into detail on setting up Firebase for your app, you can follow along with this tutorial to get up and running. We will be using Firebase Storage to store our images, though once your app is set up for using Firebase, you can do additional tasks such as storing data in the Firebase database for use with a companion app that notifies you when someone is at your door. Let's update the onPictureTaken() method to upload our image.
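An updated onPictureTaken() might look something like this. The "doorbell/" storage path and timestamped filename are assumptions for illustration; putBytes() is the standard Firebase Storage upload call:

```java
// Upload the captured JPEG to Firebase Storage under a timestamped name.
private void onPictureTaken(byte[] imageBytes) {
    if (imageBytes == null) {
        return;
    }
    StorageReference ref = FirebaseStorage.getInstance()
            .getReference()
            .child("doorbell/" + System.currentTimeMillis() + ".jpg");
    ref.putBytes(imageBytes)
            .addOnSuccessListener(taskSnapshot -> {
                // Image stored; you could now write a database entry
                // that a companion app listens for.
            })
            .addOnFailureListener(e -> {
                // Upload failed; handle or log the error.
            });
}
```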

Once your images are uploaded, you should be able to see them in Firebase Storage.

Images stored in Firebase Storage

Customize

Now that you have what you need to build the base functionality for your doorbell, it's time to really make this project yours. Earlier I mentioned that I did some customizing by using a skeleton with a moving jaw and text-to-speech capabilities. Servos can be implemented by importing the servo library from Google and including the following code in your MainActivity to set up and run the servo.

When you are done with your app, you will also need to dereference the servo motor.
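Putting those two steps together, a sketch using Google's pwmservo contrib driver might look like this. The PWM port name and the angle range are assumptions for illustration — check your board's pinout:

```java
private Servo mServo;

// Set up the servo (e.g. from onCreate()).
private void setupServo() {
    try {
        mServo = new Servo("PWM0"); // substitute your board's PWM port
        mServo.setAngleRange(0f, 180f);
        mServo.setEnabled(true);
    } catch (IOException e) {
        // Servo unavailable; the jaw just won't move.
    }
}

// Move the jaw when the doorbell triggers.
private void moveJaw() throws IOException {
    mServo.setAngle(180f);
}

// Release the servo when the Activity is destroyed.
@Override
protected void onDestroy() {
    super.onDestroy();
    if (mServo != null) {
        try {
            mServo.close();
        } catch (IOException e) {
            // Ignore; we're shutting down anyway.
        } finally {
            mServo = null;
        }
    }
}
```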

Surprisingly, text-to-speech is a little more straightforward. You just need to initialize the text-to-speech engine, like so:
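Here is a sketch of that initialization; the specific pitch value and locale are illustrative choices, not fixed requirements:

```java
private TextToSpeech mTtsEngine;

// Initialize the text-to-speech engine; the settings here are illustrative.
private void initTextToSpeech() {
    mTtsEngine = new TextToSpeech(this, status -> {
        if (status == TextToSpeech.SUCCESS) {
            mTtsEngine.setLanguage(Locale.UK); // English accent
            mTtsEngine.setPitch(0.55f);        // low, slightly robotic voice
        } else {
            mTtsEngine = null; // initialization failed; stay silent
        }
    });
}
```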

You can play with the settings to make the voice fit your application. In the sample above, I have set the voice to have a low, somewhat robotic pitch and an English accent. When you are ready to have your device say something, you can call speak() on the text-to-speech engine.
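Calling speak() is then a one-liner (assuming a TextToSpeech field named mTtsEngine; the phrase and utterance ID are placeholders):

```java
// Say a greeting when motion is detected; QUEUE_ADD keeps phrases in order.
if (mTtsEngine != null) {
    mTtsEngine.speak("Welcome, mortal", TextToSpeech.QUEUE_ADD,
            null, "doorbell_greeting");
}
```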

On a Raspberry Pi, if you are using a servo motor, you will need to ensure that your speaker is connected over a USB port, as the analog aux connection cannot be used while a PWM signal is also being created by your device. 

I highly recommend looking over Google's sample drivers to see what additional hardware you could add, and getting creative with your build. Most features that are available for Android are also supported in Android Things, including support for Google Play Services and TensorFlow machine learning. For a little inspiration, here's a video of the completed project:

 

Conclusion

In this article I've introduced a few new tools that you can use for building more complex IoT apps. 

This is the end of our series on Android Things, so I hope you've learned a lot about the platform and that you'll use it to build some amazing projects. Creating apps is one thing, but being able to affect the world around you with your apps is even more exciting. Be creative, create wonderful things, and above all else, have fun! 

Remember that Envato Tuts+ is filled with information on Android development, and you can find lots of inspiration here for your next app or IoT project.

2017-07-04T15:00:00.000Z2017-07-04T15:00:00.000ZPaul Trebilcox-Ruiz

New Course: Easy Mobile Apps With Ionic Creator

Final product image
What You'll Be Creating

How would you like to build mobile apps with an easy drag-and-drop interface? In our new short course, Easy Mobile Apps With Ionic Creator, you'll learn exactly how to do that.

What You’ll Learn

In this course, you'll learn how to use Ionic Creator to build cross-platform mobile apps for the popular Ionic framework. First, you'll get a look at all the UI components you can use with Ionic Creator. After that, you'll see how to theme your app with Sass. Finally, you'll get a look at building a complete app user interface that can later be downloaded and customized.

Ionic Creator UI components

This is a simple, 38-minute course that introduces you to a fun and useful new subject. Instructor Reggie Dawson takes you through the whole process in easy-to-follow videos. Watch his introduction below to learn more.

Watch the Introduction

 

Take the Course

You can take our new course straight away with a subscription to Envato Elements. For a single low monthly fee, you get access not only to this course, but also to our growing library of over 1,000 video courses on Tuts+. 

Plus you can now make unlimited downloads from the huge Envato Elements library of 200,000+ photos and 26,000+ design assets and templates. So no matter what kinds of projects you work on, you're sure to get good value from the subscription.

If you're looking for a quick and easy alternative, try Ionic Mobile App Creator on Envato Market, a web tool that makes it easy to build for the Ionic framework without coding. 

2017-07-04T16:38:43.000Z2017-07-04T16:38:43.000ZAndrew Blackman
