Envato Tuts+ Code: Mobile Development

Ensure High-Quality Android Code With Static Analysis Tools


In today's tutorial, we'll learn how to ensure high-quality Android code in our projects using some static code analysis tools for Java. We'll look at Checkstyle, FindBugs, PMD, and Android Studio Lint—all of them free and open source!

What Are Static Code Analysis Tools?

These are tools that parse and analyse your source code without actually executing it. The goal is to find potential vulnerabilities such as bugs and security flaws. A popular free static code analyser such as FindBugs checks your code against a set of rules which your code should adhere to—if the code doesn't follow these rules, it's a sign that something may be wrong. Think of static code analysis tools as an additional compiler that is run before the final compilation into the system language.  

Many software companies are requiring projects to pass static code analysis tests, in addition to doing code reviews and unit testing in the build process. Even maintainers of open-source projects often include one or more static code analysis steps in the build process. So learning about static analysis is an important step in writing quality code. Be aware that static code analysis—also known as "white-box" testing—should not be seen as a replacement for unit testing of your source code.

In this tutorial, we're going to learn about some popular static analysis tools that are available for Android and Java. But first, let's see some of the benefits of using static analysis.

Benefits

  • Helps detect potential bugs that even unit or manual testing might have missed.
  • Defines project-specific rules. For example, static analysis as part of the build chain helps newcomers get up to speed with the code standards of their new team.
  • Helps you improve your knowledge of a new language.
  • Scans your whole project, including files that you might not have ever read.

Setup

All the code analysis tools we'll learn about in this tutorial are available as Gradle plugins, so we can create individual Gradle tasks for each of them. Let's use a single Gradle file that will include them all. But before that, let's create a folder that will contain all of our files for the static code analysis. 

Open Android Studio and inside the app module (in Project view), create a new folder and name it code_quality_tools. This folder will contain the XML files for the code analysis tools, and it will also have a Gradle file, quality.gradle, which will run our static analysis tasks. 

[Screenshot: Android Studio project structure]

Finally, visit your build.gradle in the app module folder and include this line at the end of the file:
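Assuming the folder structure created above, the line would look something like this:

    // app/build.gradle
    apply from: 'code_quality_tools/quality.gradle'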

Here, our quality.gradle Gradle script is being applied with a reference to its local file location. 

Checkstyle

Checkstyle analyses your source code against a set of rules that you specify in an XML file, enforcing a coding standard for your project and flagging any code that deviates from it.

Checkstyle is an open-source tool that is actively maintained by the community. This means you can create your own custom checks or modify existing ones to suit your needs. For example, Checkstyle can run a check on the constant names (final, static, or both) in your classes. If your constant names do not stick to a rule of being in uppercase with words separated by an underscore, the problem will be flagged in the final report. 

Integrating Checkstyle

I'll show you how to integrate Checkstyle into our Android Studio project and demonstrate a practical example.

First, we need to create our coding rules. Inside checkstyle.xml, we create some Checkstyle configuration rules that will be run against our code.
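A minimal checkstyle.xml covering the two checks discussed below might look like this:

    <?xml version="1.0"?>
    <!DOCTYPE module PUBLIC
        "-//Puppy Crawl//DTD Check Configuration 1.3//EN"
        "http://www.puppycrawl.com/dtds/configuration_1_3.dtd">
    <module name="Checker">
        <module name="TreeWalker">
            <!-- Flag star imports such as import java.util.* -->
            <module name="AvoidStarImport"/>
            <!-- Limit methods and constructors to six parameters -->
            <module name="ParameterNumber">
                <property name="max" value="6"/>
            </module>
        </module>
    </module>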

In the above code, we include the rules or checks we want Checkstyle to validate in our source code. One rule is AvoidStarImport which, as the name says, checks whether your source code includes a star import statement like java.util.*. (Instead, you should explicitly specify the package to import, e.g. java.util.Observable.)

Some rules have properties, which we can set just like we did for ParameterNumber—this limits the number of parameters of a method or constructor. By default, the property max is 7, but we changed it to 6 instead. Take a look at some of the other checks on the Checkstyle website.

To run this check, we need to create a Gradle task. So visit the quality.gradle file and create a task called checkstyle:
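A sketch of such a task, assuming the file locations used in this tutorial:

    apply plugin: 'checkstyle'

    task checkstyle(type: Checkstyle) {
        description 'Runs Checkstyle against the Java source files'
        group 'verification'

        configFile file("${project.rootDir}/app/code_quality_tools/checkstyle.xml")
        ignoreFailures = false

        source 'src'
        include '**/*.java'
        exclude '**/gen/**'

        // Empty classpath avoids resolution errors in Android modules
        classpath = files()
    }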

Notice that in the code above, we first applied the Checkstyle Gradle plugin. We gave it a description and added it to an already predefined Gradle group called verification. 

The key properties of the Checkstyle Gradle task we are concerned with are: 

  • configFile: the Checkstyle configuration file to use.
  • ignoreFailures: whether or not to allow the build to continue if there are warnings.
  • include: the set of include patterns.
  • exclude: the set of exclude patterns. In this case, we don't scan generated classes. 

Finally, you can run the Gradle script by visiting the Gradle tool window on Android Studio, opening the verification group, and then clicking on checkstyle to run the task. 

[Screenshot: Gradle tool window with the checkstyle task]

Another way is to use the command line: 
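From the project root (on Windows, use gradlew.bat):

    ./gradlew checkstyle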

After the task has finished running, a report will be generated, which is available at app module > build > reports > checkstyle. You can open checkstyle.html to view the report. 

[Screenshot: Checkstyle report location in Android Studio]

A Checkstyle plugin is also freely available for Android Studio and IntelliJ IDEA. It offers real-time scanning of your Java files.

PMD

PMD is another open-source static analysis tool that scans your source code and finds common flaws like unused variables, empty catch blocks, unnecessary object creation, and so on. PMD has many rule sets you can choose from. An example of a rule which is part of the Design Rules set is:

  • SimplifyBooleanExpressions: avoid unnecessary comparisons in boolean expressions which complicate simple code. An example: 
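A hypothetical snippet that would trigger this rule:

    // Flagged by SimplifyBooleanExpressions: comparing a boolean to true is redundant
    public boolean isEnabled() {
        return enabled == true; // should simply be: return enabled;
    }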

PMD is configured with the pmd.xml file. Inside it, we'll include some configuration rules, such as the ones for Android, Naming, and Design.
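A sketch of such a rule set (these are PMD 5 ruleset paths, which may differ between versions):

    <?xml version="1.0"?>
    <ruleset name="Custom rules"
        xmlns="http://pmd.sourceforge.net/ruleset/2.0.0"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://pmd.sourceforge.net/ruleset/2.0.0 http://pmd.sourceforge.net/ruleset_2_0_0.xsd">

        <description>Rules for a typical Android project</description>

        <rule ref="rulesets/java/android.xml"/>
        <rule ref="rulesets/java/naming.xml"/>
        <rule ref="rulesets/java/design.xml"/>
    </ruleset>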

As we did for Checkstyle, we also need to create a PMD Gradle task for the check to be executed inside the quality.gradle file. 

PMD is also available as a Gradle plugin.
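A sketch of the task, mirroring the Checkstyle task above:

    apply plugin: 'pmd'

    task pmd(type: Pmd) {
        description 'Runs PMD against the Java source files'
        group 'verification'

        ruleSetFiles = files("${project.rootDir}/app/code_quality_tools/pmd.xml")
        ruleSets = [] // prevent the default rule sets from overriding ours
        ignoreFailures = false

        source 'src'
        include '**/*.java'
        exclude '**/gen/**'

        reports {
            xml.enabled = false
            html.enabled = true
        }
    }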

The key properties of the task we've created are: 

  • ruleSetFiles: The custom rule set files to be used.
  • source: The source for this task.
  • reports: The reports to be generated by this task.

Finally, you can run the Gradle script by visiting the Gradle tool window, opening the verification group folder, and then clicking on pmd to run the task. Or you can run it via the command line:
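    ./gradlew pmd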

A report will also be generated after the execution of the task which is available at app module > build > reports > pmd. There is also a PMD plugin available for IntelliJ or Android Studio for you to download and integrate if you want. 

FindBugs

FindBugs is another free static analysis tool that looks for potential problems in your compiled classes by checking your bytecode against a known list of bug patterns. Some of them are:

  • Class defines hashCode() but not equals(): A class implements the hashCode() method but not equals()—therefore two instances might be equal but not have the same hash codes. This falls under the bad practice category. 
  • Bad comparison of int value with long constant: The code is comparing an int value with a long constant that is outside the range of values that can be represented as an int value. This comparison is vacuous and possibly will yield an unexpected result. This falls under the correctness category. 
  • TestCase has no tests: the class is a JUnit TestCase but has not implemented any test methods. This pattern is also under the correctness category. 

FindBugs is an open-source project, so you can view, contribute to, or monitor the progress of the source code on GitHub.

In the findbugs-exclude.xml file, we want to prevent FindBugs from scanning some classes (using regular expressions) in our projects, such as auto-generated resource classes and auto-generated manifest classes. Also, if you use Dagger, we want FindBugs not to check the generated Dagger classes. We can also tell FindBugs to ignore some rules if we want. 
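A sketch of such a filter file:

    <FindBugsFilter>
        <!-- Skip auto-generated resource and manifest classes -->
        <Match>
            <Class name="~.*R\$.*"/>
        </Match>
        <Match>
            <Class name="~.*Manifest\$.*"/>
        </Match>
        <!-- Skip classes generated by Dagger -->
        <Match>
            <Class name="~.*Dagger.*"/>
        </Match>
    </FindBugsFilter>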

And finally, we'll include the findbugs task in quality.gradle:
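A sketch of the task; the classes path assumes the default build output location, which can vary between Android Gradle plugin versions:

    apply plugin: 'findbugs'

    task findbugs(type: FindBugs) {
        description 'Runs FindBugs against the compiled classes'
        group 'verification'

        classes = files("${project.rootDir}/app/build/intermediates/classes")
        effort = 'max'
        reportLevel = 'high'
        excludeFilter file("${project.rootDir}/app/code_quality_tools/findbugs-exclude.xml")

        source 'src'
        include '**/*.java'
        exclude '**/gen/**'

        reports {
            xml.enabled = false
            html.enabled = true
        }

        classpath = files()
    }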

In the first line above, we applied FindBugs as a Gradle plugin and then created a task called findbugs. The key properties of the findbugs task we are really concerned with are: 

  • classes: the classes to be analyzed.
  • effort: the analysis effort level. The value specified should be one of min, default, or max. Be aware that higher levels increase precision and find more bugs, at the cost of running time and memory consumption.
  • reportLevel: the priority threshold for reporting bugs. If set to low, all bugs are reported. If set to medium (the default), medium and high priority bugs are reported. If set to high, only high priority bugs are reported.
  • excludeFilter: the filename of a filter specifying bugs to exclude from being reported, which we have created already. 

You can then run the Gradle script by visiting the Gradle tool window, opening the verification group folder, and then clicking on findbugs to run the task. Or launch it from the command line:
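    ./gradlew findbugs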

A report will also be generated when the task has finished executing. This will be available at app module > build > reports > findbugs. The FindBugs plugin is another freely available plugin for download and integration with either IntelliJ IDEA or Android Studio.

Android Lint

Lint is another code analysis tool, but this one comes with Android Studio by default. It checks your Android project source files for potential bugs and optimizations for correctness, security, performance, usability, accessibility, and internationalization. 

To configure Lint, you have to include the lintOptions {} block in your module-level build.gradle file:
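A sketch showing the three options discussed below:

    android {
        // ...the rest of your module configuration...

        lintOptions {
            abortOnError false
            quiet true
            lintConfig file("code_quality_tools/lint.xml")
        }
    }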

The key Lint options we are concerned with are: 

  • abortOnError: whether lint should set the exit code of the process if errors are found.
  • quiet: whether to turn off analysis progress reporting.
  • lintConfig: the default configuration file to use.

Your lint.xml file can include issues you want Lint to ignore or modify, such as the example below:
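The issue IDs below are illustrative; any Lint check can be tuned this way:

    <?xml version="1.0" encoding="UTF-8"?>
    <lint>
        <!-- Turn a check off entirely -->
        <issue id="IconMissingDensityFolder" severity="ignore"/>
        <!-- Downgrade a check to a warning -->
        <issue id="HardcodedText" severity="warning"/>
    </lint>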

You can run Lint manually from Android Studio by clicking on the Analyze menu, choosing Inspect Code... (the inspection scope is the whole project), and then clicking on the OK button to proceed.

[Screenshot: Android Studio Analyze > Inspect Code menu]
[Screenshot: Specify Inspection Scope dialog]

You can also run Lint by visiting the Gradle tool window, opening the verification group, and then clicking on lint. Finally, you can run it via the command line.

On Windows:
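    gradlew lint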

On Linux or Mac:
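    ./gradlew lint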

A report will also be generated when the task has finished executing, which is available at app module > build > outputs > lint-results.html.

Bonus: StrictMode

StrictMode is a developer tool that helps stop the developers of your project from accidentally performing disk I/O or network I/O on the main thread, because this can lead to the app being sluggish or unresponsive. It also helps prevent ANR (Application Not Responding) dialogs from showing up. With StrictMode issues corrected, your app will become more responsive, and the user will enjoy a smoother experience. StrictMode uses two sets of policies to enforce its rules:

  • VM Policies: guard against bad coding practices, such as not closing SQLiteCursor objects or any other Closeable object that was created. 
  • Thread Policies: look out for operations such as disk I/O and network I/O being performed on the main application thread instead of on a background thread. 
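A typical setup, enabled only in debug builds, looks something like this:

    if (BuildConfig.DEBUG) {
        StrictMode.setThreadPolicy(new StrictMode.ThreadPolicy.Builder()
                .detectDiskReads()
                .detectDiskWrites()
                .detectNetwork()
                .penaltyLog() // log violations to logcat
                .build());
        StrictMode.setVmPolicy(new StrictMode.VmPolicy.Builder()
                .detectLeakedSqlLiteObjects()
                .detectLeakedClosableObjects()
                .penaltyLog()
                .build());
    }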

The code above can be either in your Application, Activity, or other application component's onCreate() method. 

You can learn more about StrictMode here on Envato Tuts+. 

A sample Android project implementing all of the above including rule sets of the tools for a typical Android project can be found in this post's GitHub repo.

Conclusion

In this tutorial, you learned about how to ensure high-quality Android code using static code analysis tools: what they are, benefits of using them, and how to use Checkstyle, FindBugs, Lint, PMD, and StrictMode in your application. Go ahead and give these tools a try—you might discover some problems in your code that you never expected.

In the meantime, check out some of our other courses and tutorials on Android app development!

2017-05-30 · Chike Mgbemena

Google I/O 2017 Aftermath: Building Lifecycle-Aware Components


As usual, this year’s Google I/O saw plenty of Android-related announcements.

In this series of quick tips, we’re going to take a closer look at some of the software updates and new releases you can get your hands on today.

In this first post, we’re going to look at a collection of libraries that aim to take the pain out of lifecycle management, by giving you a way to build lifecycle-aware components that can track and react to lifecycle events automatically. I’ll also be providing a brief introduction to two other components that have been designed to be used with these new lifecycle-aware components: LiveData and Room.

LifecycleOwner and LifecycleObserver

Respecting the lifecycle of your Activitys and Fragments is crucial to creating a successful app. Get these fundamentals wrong, and you’re going to wind up with memory leaks that cause your app to lag, and potentially even crash.

Another recurring problem you may encounter with lifecycle management is attempting to update your app’s UI when the activity or fragment isn’t in a valid state. For example, if an Activity receives a callback after it’s been stopped, then it’s pretty likely that your app is going to crash. 

To help you avoid all the headaches that come with lifecycle management, Google has announced a new set of lifecycle-aware components that can track the lifecycle of an activity or fragment, and adjust their behaviour accordingly.

You can access these Android Architecture Components via Google’s Maven repository today. However, they are still in alpha, so you should expect some breaking changes before the 1.0 release. 

In particular, the Fragment and AppCompatActivity classes currently cannot implement the new LifecycleOwner interface. You'll need to use the temporary LifecycleActivity and LifecycleFragment classes until the Android Architecture Components reach their 1.0 release. These classes will be deprecated as soon as Android’s fragments and Activities have been updated to support the lifecycle components.

To start experimenting with these components, you’ll need to add the Google Maven repository to your project-level build.gradle file:
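At the time of writing, that meant adding the repository by URL:

    allprojects {
        repositories {
            jcenter()
            maven { url 'https://maven.google.com' }
        }
    }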

Then, open your module-level build.gradle file, and add the following:
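The alpha1 coordinates looked like this (check the Architecture Components release notes for the current version):

    dependencies {
        compile "android.arch.lifecycle:runtime:1.0.0-alpha1"
        compile "android.arch.lifecycle:extensions:1.0.0-alpha1"
        annotationProcessor "android.arch.lifecycle:compiler:1.0.0-alpha1"
    }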

There are a few Android Architecture Components, but in this article we’re just going to focus on the following two:

  • LifecycleOwner: This is something that has a lifecycle, such as an Activity or Fragment.
  • LifecycleObserver: This is a class that can monitor a component's lifecycle status via annotated methods. These methods are called whenever the associated component enters the corresponding lifecycle state.

By moving the code that monitors and reacts to lifecycle events into a separate LifecycleObserver, you can prevent your activity or fragment’s lifecycle-related methods (such as onStart and onStop) from ballooning out of control, making your code much more human-readable.

In the following example, we’re implementing LifecycleObserver, and then using the @OnLifecycleEvent annotation to react to various lifecycle events:
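A minimal sketch; the observer and its location-related methods are hypothetical:

    import android.arch.lifecycle.Lifecycle;
    import android.arch.lifecycle.LifecycleObserver;
    import android.arch.lifecycle.OnLifecycleEvent;

    public class LocationObserver implements LifecycleObserver {

        @OnLifecycleEvent(Lifecycle.Event.ON_RESUME)
        public void startListening() {
            // Start receiving location updates
        }

        @OnLifecycleEvent(Lifecycle.Event.ON_PAUSE)
        public void stopListening() {
            // Stop receiving location updates
        }
    }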

Then, in the Activity you want to monitor, extend LifecycleActivity to get access to the LifecycleObserver information:
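For example:

    import android.arch.lifecycle.LifecycleActivity;
    import android.os.Bundle;

    public class MainActivity extends LifecycleActivity {

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // The observer now receives this activity's lifecycle events
            getLifecycle().addObserver(new LocationObserver());
        }
    }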

Many operations can only be performed when a fragment or activity is in a specific state. You can use lifecycle.getState to quickly and easily check the component’s current state, and then only perform the action if the component is in the correct state:
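In the alpha release the actual call is getCurrentState(); a guard might look like this:

    if (getLifecycle().getCurrentState().isAtLeast(Lifecycle.State.STARTED)) {
        // Only perform the operation while the activity is at least started
    }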

LiveData

LiveData is an observable data holder that exposes a stream of events that you can observe.

The key difference between LiveData and other observables, such as RxJava, is that LiveData is aware of the Android lifecycle. LiveData respects the lifecycle state of your Activities, fragments, and services, and will manage subscriptions for you.

Crucially, if an observer’s lifecycle is inactive, then the observer won’t be notified about changes to the LiveData, preventing application crashes that can occur when you try to push updates to stopped components.

To use LiveData, you just need to tell your Activity that you want to observe some data within the lifecycle:
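A sketch; viewModel.getUserName() and nameTextView are hypothetical stand-ins for your own LiveData source and UI:

    // 'viewModel.getUserName()' is a hypothetical source of LiveData
    LiveData<String> userName = viewModel.getUserName();
    userName.observe(this, new Observer<String>() {
        @Override
        public void onChanged(@Nullable String name) {
            // Push the new value to the UI
            nameTextView.setText(name);
        }
    });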

As soon as the activity starts, it’ll begin observing the LiveData, and your observer will receive an update whenever the value of that LiveData changes. If the Activity is destroyed, then the subscription will be removed automatically.

If an Activity is stopped due to a configuration change, then the new instance of that Activity will receive the last available value from the LiveData.

LiveData does share some similarities with RxJava, but the official word from Google I/O is that if you’re familiar with RxJava, then you should start your Android projects with LiveData, as it’s designed to be simple, fast and lightweight, and integrates well with the Android framework. You can then add RxJava features if you need additional reactive programming functionality.

If you do want to use LiveData with the RxJava 2 library, then open your module-level build.gradle file and add the following:
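    dependencies {
        compile "android.arch.lifecycle:reactivestreams:1.0.0-alpha1"
    }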

You’ll then be able to use the following methods: 

  • toPublisher: Adapts the LiveData stream to a ReactiveStreams Publisher. 

  • fromPublisher: Creates an observable LiveData stream from a ReactiveStreams Publisher. 

The Room Library

Although the Android framework has built-in support for working with raw SQL content, these APIs are fairly low-level and time-consuming to implement.

Google’s new Room library promises to abstract away some of the underlying implementation details of working with raw SQL tables and queries. It should also help reduce the amount of boilerplate code you need to write in order to convert SQL queries into Java data objects, and it features a Migration class that you can use to update your app without losing the user’s data. 

To use Room, open your module-level build.gradle file and add the following to the dependencies section:
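Again using the alpha1 coordinates:

    dependencies {
        compile "android.arch.persistence.room:runtime:1.0.0-alpha1"
        annotationProcessor "android.arch.persistence.room:compiler:1.0.0-alpha1"
    }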

When performing queries, you'll typically want your UI to update automatically whenever the data changes; with Room, you can achieve this by using a return value type of LiveData.

Finally, if you’re using RxJava, then your Room queries can also return RxJava 2’s Publisher and Flowable objects. To use RxJava with Room, you’ll need to open your module-level build.gradle file and add the following to the dependencies section:
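    dependencies {
        compile "android.arch.persistence.room:rxjava2:1.0.0-alpha1"
    }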

Conclusion

In this quick tip, I showed you how to manage the Android lifecycle, using LifecycleOwner and LifecycleObserver, and introduced you to two additional components you may want to use alongside the Lifecycle project. 

In the next tip, we’re going to look at Google’s plans to merge Android Wear UI components with the Android Support Library, as well as some additions to Android Wear complications. 

In the meantime, check out some of our other tutorials and our video courses on Android app development!

2017-06-01 · Jessica Thornsby


Securing iOS Data at Rest: The Keychain


Any app that saves the user's data has to take care of the security and privacy of that data. As we've seen with recent data breaches, there can be very serious consequences for failing to protect your users' stored data. In this tutorial, you'll learn some best practices for protecting your users' data.

In the previous post, you learned how to protect files using the Data Protection API. File-based protection is a powerful feature for secure bulk data storage. But it might be overkill for a small amount of information to protect, such as a key or password. For these types of items, the keychain is the recommended solution.

Keychain Services

The keychain is a great place to store smaller amounts of information such as sensitive strings and IDs that persist even when the user deletes the app. An example might be a device or session token that your server returns to the app upon registration. Whether you call it a secret string or unique token, the keychain refers to all of these items as passwords.

There are a few popular third-party libraries for keychain services, such as Strongbox (Swift) and SSKeychain (Objective-C). Or, if you want complete control over your own code, you may wish to directly use the Keychain Services API, which is a C API. 

I will briefly explain how the keychain works. You can think of the keychain as a typical database where you run queries on a table. The functions of the keychain API all require a CFDictionary object that contains attributes of the query. 

Each entry in the keychain has a service name. The service name is an identifier: a key for whatever value you want to store or retrieve in the keychain. To allow a keychain item to be stored for a specific user only, you'll also often want to specify an account name. 

Because each keychain function takes a similar dictionary with many of the same parameters to make a query, you can avoid duplicate code by making a helper function that returns this query dictionary.
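A sketch of such a helper in Swift:

    import Foundation
    import Security

    func passwordQuery(service: String, account: String) -> [String: Any] {
        return [
            kSecClass as String: kSecClassGenericPassword, // we're storing a password
            kSecAttrService as String: service,
            kSecAttrAccount as String: account,
            kSecAttrAccessible as String: kSecAttrAccessibleWhenUnlocked
        ]
    }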

This code sets up the query Dictionary with your account and service names and tells the keychain that we will be storing a password. 

Similarly to how you can set the protection level for individual files (as we discussed in the previous post), you can also set the protection level for your keychain items using the kSecAttrAccessible key.

Adding a Password

The SecItemAdd() function adds data to the keychain. This function takes a Data object, which makes it versatile for storing many kinds of objects. Using the password query function we created above, let's store a string in the keychain. To do this, we just have to convert the String to Data.
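A sketch, building on the passwordQuery helper above:

    func savePassword(_ password: String, service: String, account: String) -> Bool {
        guard let data = password.data(using: .utf8) else { return false }
        // Remove any existing item first, to avoid a duplicate-item error
        deletePassword(service: service, account: account)
        var query = passwordQuery(service: service, account: account)
        query[kSecValueData as String] = data
        return SecItemAdd(query as CFDictionary, nil) == errSecSuccess
    }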

Deleting a Password

To prevent duplicate inserts, the code above first deletes the previous entry if there is one. Let's write that function now. This is accomplished using the SecItemDelete() function.
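    @discardableResult
    func deletePassword(service: String, account: String) -> Bool {
        let query = passwordQuery(service: service, account: account)
        return SecItemDelete(query as CFDictionary) == errSecSuccess
    }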

Retrieving a Password

Next, to retrieve an entry from the keychain, use the SecItemCopyMatching() function. It will return an AnyObject that matches your query.
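A sketch of the load function:

    func loadPassword(service: String, account: String) -> String? {
        var query = passwordQuery(service: service, account: account)
        query[kSecReturnData as String] = kCFBooleanTrue
        query[kSecMatchLimit as String] = kSecMatchLimitOne
        var result: AnyObject?
        let status = SecItemCopyMatching(query as CFDictionary, &result)
        guard status == errSecSuccess, let data = result as? Data else { return nil }
        return String(data: data, encoding: .utf8)
    }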

In this code, we set the kSecReturnData parameter to kCFBooleanTrue. kSecReturnData means the actual data of the item will be returned. A different option could be to return the attributes (kSecReturnAttributes) of the item. The key takes a CFBoolean type which holds the constants kCFBooleanTrue or kCFBooleanFalse. We are setting kSecMatchLimit to kSecMatchLimitOne so that only the first item found in the keychain will be returned, as opposed to an unlimited number of results.

Public and Private Keys

The keychain is also the recommended place to store public and private key objects, for example, if your app works with and needs to store EC or RSA SecKey objects. 

The main difference is that instead of telling the keychain to store a password, we can tell it to store a key. In fact, we can get specific by setting the types of keys stored, such as whether it is public or private. All that needs to be done is to adapt the query helper function to work with the type of key you want. 

Keys generally are identified using a reverse domain tag such as com.mydomain.mykey instead of service and account names (since public keys are openly shared between different companies or entities). We will take the service and account strings and convert them to a tag Data object. For example, the above code adapted to store an RSA Private SecKey would look like this:
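A sketch; the reverse-domain tag is built from the hypothetical service and account strings:

    func privateKeyQuery(service: String, account: String) -> [String: Any] {
        // Keys use a tag rather than service/account names
        let tag = (service + "." + account).data(using: .utf8)!
        return [
            kSecClass as String: kSecClassKey, // storing a key, not a password
            kSecAttrApplicationTag as String: tag,
            kSecAttrKeyType as String: kSecAttrKeyTypeRSA,
            kSecAttrKeyClass as String: kSecAttrKeyClassPrivate,
            kSecAttrAccessible as String: kSecAttrAccessibleWhenUnlocked
        ]
    }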

Application Passwords

Items secured with the kSecAttrAccessibleWhenUnlocked flag are only unlocked when the device is unlocked, but it relies on the user having a passcode or Touch ID set up in the first place. 

The applicationPassword credential allows items in the keychain to be secured using an additional password. This way, if the user does not have a passcode or Touch ID set up, the items will still be secure, and it adds an extra layer of security if they do have a passcode set.  

As an example scenario, after your app authenticates with your server, your server could return the password over HTTPS that is required to unlock the keychain item. This is the preferred way of supplying that additional password. Hardcoding a password in the binary is not recommended.

Another scenario might be to retrieve the additional password from a user-provided password in your app; however, this requires more work to secure properly (using PBKDF2). We will look at securing user-provided passwords in the next tutorial. 

Another use of an application password is for storing a sensitive key—for example, one that you would not want to be exposed just because the user had not yet set up a passcode. 

applicationPassword is only available on iOS 9 and above, so you will need a fallback that doesn't use applicationPassword if you are targeting lower iOS versions. To use the code, you will need to add the following into your bridging header:
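The exact header line isn't preserved here; the code that follows relies on LAContext, so the import would presumably be:

    #import <LocalAuthentication/LocalAuthentication.h>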

The following code sets a password for the query Dictionary.
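A sketch; appPassword and the service/account names are hypothetical stand-ins:

    import LocalAuthentication
    import Security

    // appPassword: the additional password, e.g. returned from your server over HTTPS
    let appPassword = "password-from-server"

    var error: Unmanaged<CFError>?
    // Require an app-supplied password in addition to the device protections
    let accessControl = SecAccessControlCreateWithFlags(
        kCFAllocatorDefault,
        kSecAttrAccessibleWhenUnlocked,
        .applicationPassword,
        &error)

    let context = LAContext()
    context.setCredential(appPassword.data(using: .utf8), type: .applicationPassword)

    var query = passwordQuery(service: "MyService", account: "myAccount")
    query[kSecAttrAccessible as String] = nil // can't be combined with kSecAttrAccessControl
    query[kSecAttrAccessControl as String] = accessControl
    query[kSecUseAuthenticationContext as String] = context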

Notice that we set kSecAttrAccessControl on the Dictionary. This is used in place of kSecAttrAccessible, which was previously set in our passwordQuery method. If you try to use both, you'll get an OSStatus -50 error.

User Authentication

Starting in iOS 8, you can store data in the keychain that can only be accessed after the user successfully authenticates on the device with Touch ID or a passcode. When it's time for the user to authenticate, Touch ID will take priority if it is set up, otherwise the passcode screen is presented. Saving to the keychain will not require the user to authenticate, but retrieving the data will. 

You can set a keychain item to require user authentication by providing an access control object set to .userPresence. If no passcode is set up then any keychain requests with .userPresence will fail. 
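A sketch of a save query protected by user presence:

    var error: Unmanaged<CFError>?
    let accessControl = SecAccessControlCreateWithFlags(
        kCFAllocatorDefault,
        kSecAttrAccessibleWhenPasscodeSetThisDeviceOnly,
        .userPresence, // require Touch ID or passcode on retrieval
        &error)

    var query = passwordQuery(service: "MyService", account: "myAccount")
    query[kSecAttrAccessible as String] = nil // replaced by the access control object
    query[kSecAttrAccessControl as String] = accessControl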

This feature is good when you want to make sure that your app is being used by the right person. For example, it would be important for the user to authenticate before being able to log in to a banking app. This will protect users who have left their device unlocked, so that their banking information cannot be accessed. 

Also, if you do not have a server-side component to your app, you can use this feature to perform device-side authentication instead.

For the load query, you can provide a description of why the user needs to authenticate.
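This is done with the kSecUseOperationPrompt key; for example:

    var query = passwordQuery(service: "MyService", account: "myAccount")
    query[kSecReturnData as String] = kCFBooleanTrue
    query[kSecMatchLimit as String] = kSecMatchLimitOne
    query[kSecUseOperationPrompt as String] = "Authenticate to access your account."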

When retrieving the data with SecItemCopyMatching(), the function will show the authentication UI and wait for the user to use Touch ID or enter the passcode. Since SecItemCopyMatching() will block until the user has finished authenticating, you will need to call the function from a background thread in order to allow the main UI thread to stay responsive.

Again, we are setting kSecAttrAccessControl on the query Dictionary. You will need to remove kSecAttrAccessible, which was previously set in our passwordQuery method. Using both at once will result in an OSStatus -50 error.

Conclusion

In this article, you've had a tour of the Keychain Services API. Along with the Data Protection API that we saw in the previous post, use of this library is part of the best practices for securing data. 

However, if the user does not have a passcode or Touch ID on the device, there is no encryption for either framework. Because the Keychain Services and Data Protection APIs are commonly used by iOS apps, they are sometimes targeted by attackers, especially on jailbroken devices. If your app does not work with highly sensitive information, this may be an acceptable risk. While iOS is constantly updating the security of the frameworks, we are still at the mercy of the user updating the OS, using a strong passcode, and not jailbreaking their device. 

The keychain is meant for smaller pieces of data, and you may have a larger amount of data to secure that is independent of the device authentication. While iOS updates add some great new features such as the application password, you may still need to support lower iOS versions and still have strong security. For some of these reasons, you may instead want to encrypt the data yourself. 

The final article in this series covers encrypting the data yourself using AES encryption, and while it's a more advanced approach, this allows you to have full control over how and when your data is encrypted.

So stay tuned. And in the meantime, check out some of our other posts on iOS app development!

2017-06-02 · Collin Stuart


Google I/O 2017 Aftermath: What's New for Android Wear?


In this series of tips, we’ve been taking a closer look at some of the new Android features and tools announced at this year’s Google I/O.

In this post, we’re going to be focusing on Android Wear. 

Google has been providing Android Wear UI components via a dedicated Wearable Support Library for a while now, but this is all about to change! 

At this year’s event, Google announced that the various components that make up the Wearable Support Library are going to be deprecated, merged, or migrated into the Android Support Library. In this article, we’ll be taking a look at which components are going to be merged, moved, and removed, and how you can start using the Android Support Library’s new Wear module today.

We’ll also be looking at some new tools that are designed to make it easier to work with Android Wear’s Complications API.

New Android Wear UI Library 

At this year’s Google I/O, the Android Wear team announced that the bulk of the Wearable Support Library is moving to the Android Support Library. The Wear-specific components will form the basis of a new support-wear module, similar to other modules in the Android Support Library, such as support-recyclerview and support-design.

According to the Android Wear sessions at Google I/O, we can expect this new Wear module to graduate out of beta at the same time as Android O officially launches.

However, not all components from the Wearable Support Library will be making the move to the Android Support Library. Google also announced that some components from the Wearable Support Library will be:

  • Merged. Components that are applicable to both wearable and handheld devices will be merged into either the Android framework or more generic support modules. Components that are due to be merged include CircledImageView, DelayedConfirmationView, and ActionButton.

  • Deprecated. Google is going to deprecate the Android Wear UI components associated with design patterns that haven’t proven popular with Android Wear users. Specifically, Google will remove the two-dimensional spatial model that allowed Android Wear users to move horizontally and vertically, and will replace it with a vertical LinearLayout. All of the classes associated with the two-dimensional spatial model will be deprecated, including GridViewPager, action buttons, and action layouts.

Although this migration is an ongoing process, Google has already integrated some Android Wear components into version 26.0.0 Beta1 of the Android Support Library.

  • BoxInsetLayout: This is a screen shape-aware FrameLayout that can help you design a single layout that works for both square and round watch faces. When your layout is displayed on a round screen, a BoxInsetLayout will box all its children into an imaginary square in the center of the circular screen. You can specify how your UI elements will be positioned in this center square, using the layout_box attribute. When your app is displayed on a square screen, Android ignores the layout_box attribute and uses a window inset of zero, so your views will be positioned as though they’re inside a regular FrameLayout.

  • SwipeDismissFrameLayout: This is a layout that you can use to implement custom interactions for your Views and fragments. You’ll generally use SwipeDismissFrameLayout to enable users to dismiss views and fragments by swiping onscreen, essentially replicating the functionality of the Back button found on Android smartphones and tablets.

  • WearableRecyclerView: This is a Wearable-specific implementation of RecyclerView that helps you design more effective layouts for round displays. The WearableRecyclerView makes more effective use of the curvature of a round screen, and is typically used for implementing curved lists. WearableRecyclerView also gives you the option to use circular scrolling gestures in your app, via its setCircularScrollingGestureEnabled() method.

Adding the New Android Wear Module 

To start using the new Android Wear module, you’ll need to have Android Support Library 26.0.0 Beta1 installed—which leads us on to another Google I/O announcement.

At this year’s event, Google announced that it would be distributing all upcoming versions of the Android Support Library (26.0.0 Beta1 and higher) via the Google Maven repository only.

Downloading the Android Support Library from this repository simply requires you to add the Google Maven Repository to your build.gradle file: 
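As in the previous tip, at the time of writing this meant referencing the repository by URL:

    allprojects {
        repositories {
            jcenter()
            maven { url 'https://maven.google.com' }
        }
    }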

You can then set up your compile dependencies as usual, so open your wearable module’s build.gradle file and add the Wear library as a project dependency:
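    dependencies {
        compile 'com.android.support:wear:26.0.0-beta1'
    }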

To add a component from the Android Wear UI library to your UI, simply open the layout resource file and make sure you use the new, fully-qualified package name. Essentially, this means replacing android.support.wearable.view with android.support.wear.widget. For example, here I’m using the BoxInsetLayout class from the Android Support Library:
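A sketch of such a layout; note that in the new module the layout_box attribute becomes boxedEdges:

    <android.support.wear.widget.BoxInsetLayout
        xmlns:android="http://schemas.android.com/apk/res/android"
        xmlns:app="http://schemas.android.com/apk/res-auto"
        android:layout_width="match_parent"
        android:layout_height="match_parent">

        <FrameLayout
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            app:boxedEdges="all">

            <!-- Children here are boxed into the centre square on round screens -->

        </FrameLayout>

    </android.support.wear.widget.BoxInsetLayout>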

To import this class into your Java file, you just need to use the same name, so the old:
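    import android.support.wearable.view.BoxInsetLayout;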

Becomes the new:
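    import android.support.wear.widget.BoxInsetLayout;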

Easier Integration With the Complications API 

Android Wear users can choose from a huge variety of styles of watch faces, and although the Complications API does give watch faces complete control over how they draw this data, this flexibility can make it difficult to add complications support to your watch faces. 

At this year’s Google I/O, the Android Wear team introduced some additions that should make it easier to work with the Complications API.

ComplicationDrawable

ComplicationDrawable is a new solution that promises to handle all of your complication’s styling and layout for you. 

If you create a ComplicationDrawable but don't set any style parameters, then you'll get a default look, but you can also use the ComplicationDrawable to style every part of your complication, including its background colour, corner radius, and border. 

If your project targets API 24 or higher, then you can define a ComplicationDrawable object by creating a dedicated layout resource file in your project’s /res/drawable folder. 

Open your XML file, and then create a ComplicationDrawable using the following tags:   
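A sketch; the attribute names follow the wearable complications rendering documentation, but treat this as illustrative and check the current reference for the full set:

    <android.support.wearable.complications.rendering.ComplicationDrawable
        xmlns:app="http://schemas.android.com/apk/res-auto"
        app:backgroundColor="#FF000000"
        app:borderColor="#FFFFFFFF"
        app:borderRadius="10dp"
        app:textColor="#FFFFFFFF">

        <ambient
            app:backgroundColor="#FF000000"
            app:textColor="#FFCCCCCC"/>

    </android.support.wearable.complications.rendering.ComplicationDrawable>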

Note that attributes defined at the top level apply to both standard and ambient modes, unless you specifically override these attributes in the file’s <ambient> section.

You then need to pass the complication data to your drawable:  
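For example (complicationDrawable and complicationData being fields your watch face already holds):

    // complicationData arrives in onComplicationDataUpdate()
    complicationDrawable.setComplicationData(complicationData);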

And finally, draw your complication by calling setBounds on your drawable:
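For example, inside onDraw(), where complicationBounds and canvas are supplied by your watch face (hypothetical names):

    // Position the drawable, then render it for the current time
    complicationDrawable.setBounds(complicationBounds);
    complicationDrawable.draw(canvas, currentTimeMillis);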

TextRenderer

Most complications include some form of text, and TextRenderer is a new class that makes a number of small but powerful adjustments to the way complication text is drawn on the canvas.

You can use TextRenderer to specify the bounds that your complication text has to work with, and TextRenderer will then resize the text or arrange it over several lines, in order to fit this area. In addition, when the screen enters Android Wear’s “always on” ambient mode, TextRenderer adjusts your text by hiding characters and styling that are not suitable for this mode. 

To take advantage of this new class, you need to create a TextRenderer when you initialize your watch face, and then pass it the TextPaint you want to use, which defines style attributes such as the font and text colour:
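For example:

    // Uses android.text.TextPaint and
    // android.support.wearable.complications.rendering.TextRenderer
    TextPaint textPaint = new TextPaint();
    textPaint.setColor(Color.WHITE);
    textPaint.setAntiAlias(true);

    TextRenderer textRenderer = new TextRenderer();
    textRenderer.setPaint(textPaint);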

You need to create a TextRenderer for each field, so you’ll also need to create a TextRenderer for your title text:
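    TextRenderer titleRenderer = new TextRenderer();
    titleRenderer.setPaint(titlePaint); // a second TextPaint for the title style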

When it’s time to draw, you’ll need to set the text on the renderer by calling setText, and then retrieve the text by calling getText.
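A sketch; complicationData, textBounds, and canvas come from your watch face's drawing code (hypothetical names):

    CharSequence text = complicationData.getShortText()
            .getText(context, currentTimeMillis);
    textRenderer.setText(text);
    textRenderer.draw(canvas, textBounds);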

Note that many complications are time-dependent, which is why currentTimeMillis is included in the above code snippet.

Conclusion

In this article, we looked at how to add the new Android Wear UI Library to your project, and how you can start working with a number of components from this library today. We also examined two components that promise to make integrating with Android Wear’s Complications API much easier than it’s previously been.

In the next instalment, we’ll be getting a preview of the upcoming features in Android Studio 3.0, by exploring the latest Android Studio Canary build.

In the meantime, check out some of our other tutorials and our video courses on Android app development!

2017-06-05 · Jessica Thornsby

Google I/O 2017 Aftermath: What's New for Android Wear?

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-28931

In this series of tips, we’ve been taking a closer look at some of the new Android features and tools announced at this year’s Google I/O.

In this post, we’re going to be focusing on Android Wear. 

Google has been providing Android Wear UI components via a dedicated Wearable Support Library for a while now, but this is all about to change! 

At this year’s event, Google announced that the various components that make up the Wearable Support Library are going to be deprecated, merged, or migrated into the Android Support Library. In this article, we’ll be taking a look at which components are going to be merged, moved and removed, and how you can start using the Android Support Library’s new Wear module today

We’ll also be looking at some new tools that are designed to make it easier to work with Android Wear’s Complications API.

New Android Wear UI Library 

At this year’s Google I/O, the Android Wear team announced that the bulk of the Wearable Support Library is moving to the Android Support Library. The Wear-specific components will form the basis of a new support-wear module, similar to other modules in the Android Support Library, such as support-recylerview and support-design

According to the Android Wear sessions at Google I/O, we can expect this new Wear module to graduate out of beta at the same time as Android O officially launches.

However, not all components from the Wearable Support Library will be making the move to the Android Support Library. Google also announced that some components from the Wearable Support Library will be:

  • Merged. Components that are applicable to both wearable and handheld devices will be merged into either the Android framework or more generic support modules. Components that are due to be merged include CircledImageView, DelayedConfirmationView, and ActionButton.

  • Deprecated. Google is going to deprecate the Android Wear UI components associated with design patterns that haven’t proven popular with Android Wear users. Specifically, Google will remove the two-dimensional spatial model that allowed Android Wear users to move horizontally and vertically, and will replace it with a vertical LinearLayout. All of the classes associated with the two-dimensional spatial model will be deprecated, including GridViewPager, action buttons, and action layouts.

Although this migration is an ongoing process, Google has already integrated some Android Wear components into version 26.0.0 Beta1 of the Android Support Library.

  • BoxInsetLayout: This is a screen shape-aware FrameLayout that can help you design a single layout that works for both square and round watch faces. When your layout is displayed on a round screen, a BoxInsetLayout will box all its children into an imaginary square in the center of the circular screen. You can specify how your UI elements will be positioned in this center square, using the layout_box attribute. When your app is displayed on a square screen, Android ignores the layout_box attribute and uses a window inset of zero, so your views will be positioned as though they’re inside a regular FrameLayout.

  • SwipeDismissFrameLayout: This is a layout that you can use to implement custom interactions for your Views and fragments. You’ll generally use SwipeDismissFrameLayout to enable users to dismiss views and fragments by swiping onscreen, essentially replicating the functionality of the Back button found on Android smartphones and tablets.

  • WearableRecyclerView: This is a Wearable-specific implementation of RecyclerView that helps you design more effective layouts for round displays. The WearableRecyclerView makes more effective use of the curvature of a round screen, and is typically used for implementing curved lists. WearableRecyclerView also gives you the option to use circular scrolling gestures in your app, via its setCircularScrollingGestureEnabled() method.

Adding the New Android Wear Module 

To start using the new Android Wear module, you’ll need to have Android Support Library 26.0.0 Beta1 installed—which leads us on to another Google I/O announcement.

At this year’s event, Google announced that it would be distributing all upcoming versions of the Android Support Library (26.0.0 Beta1 and higher) via the Google Maven repository only.

Downloading the Android Support Library from this repository simply requires you to add the Google Maven Repository to your build.gradle file: 
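A minimal sketch of the project-level change, using the standard Google Maven URL:

    allprojects {
        repositories {
            jcenter()
            maven {
                url "https://maven.google.com"
            }
        }
    }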

You can then set up your compile dependencies as usual, so open your wearable module’s build.gradle file and add the Wear library as a project dependency:
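Assuming the 26.0.0-beta1 release of the new wear artifact, the dependency looks like this:

    dependencies {
        compile 'com.android.support:wear:26.0.0-beta1'
    }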

To add a component from the Android Wear UI library to your UI, simply open the layout resource file and make sure you use the new, fully-qualified package name. Essentially, this means replacing android.support.wearable.view with android.support.wear.widget. For example, here I’m using the BoxInsetLayout class from the Android Support Library:
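A minimal sketch of such a layout:

    <android.support.wear.widget.BoxInsetLayout
        xmlns:android="http://schemas.android.com/apk/res/android"
        android:layout_width="match_parent"
        android:layout_height="match_parent">

        <!-- On a round screen, child views placed here are boxed into
             the imaginary centre square described above -->

    </android.support.wear.widget.BoxInsetLayout>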

To import this class into your Java file, you just need to use the same name, so the old:
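    // old Wearable Support Library import
    import android.support.wearable.view.BoxInsetLayout;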

Becomes the new:
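    // new Android Support Library import
    import android.support.wear.widget.BoxInsetLayout;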

Easier Integration With the Complications API 

Android Wear users can choose from a huge variety of styles of watch faces, and although the Complications API does give watch faces complete control over how they draw complication data, this flexibility can make it difficult to add complications support to your watch faces. 

At this year’s Google I/O, the Android Wear team introduced some additions that should make it easier to work with the Complications API.

ComplicationDrawable

ComplicationDrawable is a new solution that promises to handle all of your complication’s styling and layout for you. 

If you create a ComplicationDrawable but don't set any style parameters, then you'll get a default look, but you can also use the ComplicationDrawable to style every part of your complication, including its background colour, corner radius, and border. 

If your project targets API 24 or higher, then you can define a ComplicationDrawable object by creating a dedicated resource file in your project’s /res/drawable folder. 

Open your XML file, and then create a ComplicationDrawable using the following tags:   
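A sketch of the drawable XML; the attribute and colour names here are illustrative rather than exhaustive:

    <android.support.wearable.complications.rendering.ComplicationDrawable
        xmlns:app="http://schemas.android.com/apk/res-auto"
        app:backgroundColor="@color/complication_background"
        app:borderColor="@color/complication_border"
        app:textColor="@color/complication_text">

        <!-- Attributes in this section only apply in ambient mode -->
        <ambient
            app:backgroundColor="@android:color/black" />

    </android.support.wearable.complications.rendering.ComplicationDrawable>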

Note that attributes defined at the top level apply to both standard and ambient modes, unless you specifically override these attributes in the file’s <ambient> section.

You then need to pass the complication data to your drawable:  
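A sketch, assuming the data arrives via your watch face’s complication callback:

    // complicationData is delivered by the system, for example in
    // onComplicationDataUpdate() in your watch face service
    complicationDrawable.setComplicationData(complicationData);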

And finally, set your drawable’s bounds with setBounds, and then draw your complication:
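    // bounds, canvas, and currentTimeMillis come from your
    // watch face's drawing code
    complicationDrawable.setBounds(bounds);
    complicationDrawable.draw(canvas, currentTimeMillis);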

TextRenderer

Most complications include some form of text, and TextRenderer is a new class that makes a number of small but powerful adjustments to the way complication text is drawn on the canvas.

You can use TextRenderer to specify the bounds that your complication text has to work with, and TextRenderer will then resize the text or arrange it over several lines, in order to fit this area. In addition, when the screen enters Android Wear’s “always on” ambient mode, TextRenderer adjusts your text by hiding characters and styling that are not suitable for this mode. 

To take advantage of this new class, you need to create a TextRenderer when you initialize your watch face, and then pass it the TextPaint you want to use, which defines style attributes such as the font and text colour:
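A sketch, where the TextPaint set-up is purely illustrative:

    // Style attributes for the complication's body text
    TextPaint textPaint = new TextPaint();
    textPaint.setColor(Color.WHITE);
    textPaint.setAntiAlias(true);

    TextRenderer textRenderer = new TextRenderer();
    textRenderer.setPaint(textPaint);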

You need to create a TextRenderer for each field, so you’ll also need to create a TextRenderer for your title text:
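    // titlePaint is a second TextPaint, styled for the title field
    TextRenderer titleRenderer = new TextRenderer();
    titleRenderer.setPaint(titlePaint);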

When it’s time to draw, you’ll need to retrieve the current text by calling getText, and then set it on the renderer by calling setText: 
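A sketch of the draw step; context and textBounds are assumed to be available in your watch face:

    long currentTimeMillis = System.currentTimeMillis();
    // ComplicationText resolves any time-dependent values for the given time
    textRenderer.setText(
            complicationData.getShortText().getText(context, currentTimeMillis));
    textRenderer.draw(canvas, textBounds);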

Note that many complications are time-dependent, which is why currentTimeMillis is included in the above code snippet.

Conclusion

In this article, we looked at how to add the new Android Wear UI Library to your project, and how you can start working with a number of components from this library today. We also examined two components that promise to make integrating with Android Wear’s Complications API much easier than it’s previously been.

In the next instalment, we’ll be getting a preview of the upcoming features in Android Studio 3.0, by exploring the latest Android Studio Canary build.

In the meantime, check out some of our other tutorials and our video courses on Android app development!

2017-06-05T10:00:00.000Z2017-06-05T10:00:00.000ZJessica Thornsby

Get Started With Ionic Services: Deploy

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-28827

One of Ionic's strengths is in the services that it offers on top of the framework. This includes services for authenticating users of your app, push notifications, and analytics. In this series, we're learning about those services by creating apps which make use of them. 

In this post, we're going to take a look at Ionic Deploy. This service allows you to publish changes to your Ionic app without the need for recompiling and re-submitting it to the app store. This is very useful for quickly pushing bug fixes, minor updates and other cosmetic changes to the app. With the "Deploy Channels" feature, you can also perform A/B tests by introducing different code changes to different deploy channels.

Not all changes that you want to introduce to your app can be pushed using Ionic Deploy, though. Only changes to your HTML, CSS, JavaScript, and assets under your www directory can be pushed this way. Binary changes such as updates to the Cordova platform version, Cordova plugins (either updating existing ones or adding new ones), and app assets such as the icon and splash screen cannot be pushed using Ionic Deploy. 

How It Works

In your Ionic app, you can have code that will check for available deployments (updates). By default, it will check for deployments in the production channel. But you can also specify other channels to receive updates from. 

You can then push your changes using the ionic upload command. This will upload your changes to Ionic Cloud. Once they're uploaded, you can choose which channel you wish to deploy to, and whether to deploy now or at a later time. 

Deploying to a channel that your app is monitoring will trigger the code in your app to execute. That code is then responsible for downloading the update, extracting it, and reloading the app to apply the changes.

What You'll Be Building

In this post, you'll be using the command line to push the changes and test if the deploy works as expected. To keep things simple, the changes that we're going to introduce will be mainly to the UI. For every deploy, we're going to change the version number displayed for the app. We're also going to change the image displayed on the app to show that assets can be updated as well.

Setting Up

Now that you have an idea of how Ionic Deploy works and what you will be building, it's time to get your hands dirty by actually creating an app which uses Ionic Deploy. Start by bootstrapping a new Ionic app:
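Assuming the tabs starter template and the app name used throughout this tutorial:

    ionic start deployApp tabs --v2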

The command above will create a new app using the tabs template. Navigate inside the deployApp directory once it's done installing:
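    cd deployApp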

Next, you need to install the @ionic/cloud-angular package. This is the JavaScript library for the Ionic Cloud service. It allows us to interact with the Ionic Deploy service and other Ionic services via JavaScript code.
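Install it from npm and save it to your project’s dependencies:

    npm install @ionic/cloud-angular --save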

Once that's done installing, you can now initialize a new Ionic app based on this app. Before you do so, make sure that you already have an Ionic account. The command line tool will prompt you to log in with your Ionic account if you haven't done so already. 
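The initialization itself is a single command:

    ionic io init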

This will create a new app named "deployApp" (or whatever you named your app when you bootstrapped a new Ionic app) under your Ionic apps dashboard

Once you've verified that the app is listed on your Ionic dashboard, go back to the terminal and install the Deploy plugin:
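A sketch of the install command, assuming the ionic-plugin-deploy package:

    ionic plugin add ionic-plugin-deploy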

Note that this plugin is the one which actually does the heavy lifting. The @ionic/cloud-angular package simply exposes the APIs required for easily working with the Ionic services inside an Ionic app.

Working With Deploys

Now that you have done all the necessary setup, it's time to add the code for checking and applying updates. But before you do that, first serve the app through your browser:
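    ionic serve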

This allows you to check whether the code that you've added is working or not. This way you can make the necessary corrections as soon as you see an error.

Open the src/app/app.module.ts file. Under the SplashScreen import, import the services needed for working with Ionic Cloud:
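    import { CloudSettings, CloudModule } from '@ionic/cloud-angular';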

Next, add the app_id of your Ionic app. You can find this on your Ionic app dashboard, right below the name of the app.
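A sketch, with a placeholder where your real ID goes:

    const cloudSettings: CloudSettings = {
      'core': {
        'app_id': 'YOUR_APP_ID' // replace with the ID from your dashboard
      }
    };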

Once you've added that, you can now include it as one of the modules of the app:
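Only the imports array of the @NgModule decorator needs to change:

    imports: [
      BrowserModule,
      IonicModule.forRoot(MyApp),
      CloudModule.forRoot(cloudSettings)
    ],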

Next, open the src/app/app.component.ts file. Right below the TabsPage import, include the following:
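    import { Deploy } from '@ionic/cloud-angular';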

In the constructor(), add the services that we imported earlier:
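    constructor(platform: Platform, public deploy: Deploy) {
      platform.ready().then(() => {
        // the template's existing initialization code stays here
      });
    }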

Setting the Deployment Channel

Since we're still developing the app, set the deployment channel to dev:
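    // e.g. in the constructor of app.component.ts
    this.deploy.channel = 'dev';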

Later on, if you want to switch to the production channel, you can simply remove this line as production is the default channel for deployments. If you have created another channel, you can also include its name here.

Working With Snapshots

You can access an array of snapshots that have been previously downloaded by the app. The snapshots variable is an array containing the IDs of each of the snapshots.

We won't really be using snapshots for this app, but it's good to know that the app is storing this type of information for later use. In the example below, we'll go through the list of old snapshots and delete each one to free up some space on the device.
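A minimal sketch, assuming the getSnapshots() and deleteSnapshot() methods of the Deploy service:

    this.deploy.getSnapshots().then((snapshots) => {
      snapshots.forEach((snapshotId) => {
        this.deploy.deleteSnapshot(snapshotId);
      });
    });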

Checking for Updates

To check for updates, use the check() method. This returns a boolean value that tells you whether a new snapshot is available or not. By default, the latest deploy will create a new snapshot. So only the latest deploy will be applied if you pushed two updates consecutively.
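    this.deploy.check().then((snapshotAvailable: boolean) => {
      if (snapshotAvailable) {
        // a new snapshot is ready to be downloaded
      }
    });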

If a snapshot is available for download, you can get additional information about it by calling the getMetaData() method. This metadata can be added to a deploy through the Ionic app dashboard. Key-value pairs can be added here, and each of them becomes available as a property for the metadata object. Later on, we will be using metadata to customize the messages shown in the app every time a new update becomes available.

Next, show a confirmation alert message to let the user decide whether they want to download the update or not:
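A sketch using ionic-angular’s AlertController, assuming it’s injected as alertCtrl and that a hypothetical downloadUpdate() helper kicks off the download:

    let alert = this.alertCtrl.create({
      title: 'Update Available',
      message: 'A new version of this app is available. Download it now?',
      buttons: [
        { text: 'No', role: 'cancel' },
        { text: 'Yes', handler: () => this.downloadUpdate() } // hypothetical helper
      ]
    });
    alert.present();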

You might be concerned that this would annoy the user if they keep receiving the prompt to update their app after repeatedly responding "No". More often than not, though, this is actually a good thing. There shouldn't be any reason for a user to reject an update if it's going to improve their experience. 

Downloading and Applying Updates

When the user agrees, you can go ahead and download the update. This may take a while depending on your internet connection and your device. Once the update is downloaded, show a loader to attract the user's attention while it extracts. Once it's extracted, reload the app and hide the loader.
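    this.deploy.download().then(() => {
      // a loader (e.g. ionic-angular's LoadingController) keeps the
      // user informed while the snapshot is unpacked
      return this.deploy.extract();
    }).then(() => {
      this.deploy.load(); // reloads the app with the new code
    });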

Take a look at what the updated app.component.ts file should look like after all those changes.

Installing the App on the Device

Now that the code for checking deploys is added, we can build the app and install it on a device. The rest of the changes that we're going to introduce will be primarily pushed through the Ionic Deploy service. 

Go ahead and add the android platform to your Ionic project and build the .apk file with the following commands:
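    ionic platform add android
    ionic build android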

This will create an android-debug.apk file inside the platforms/android/build/outputs/apk folder. Copy it to your device and install it.

Pushing Changes

Now it's time for us to push some changes to the app. To try it out, just make some minor changes to the app UI. And now you can upload the changes:
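    ionic upload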

Adding Metadata

Once it's done uploading, a new entry will be listed in your Recent Activity. Click the Edit link of that entry. This will allow you to add a note, versioning information and metadata to that specific release. It's always a good idea to add a note so you know what that specific release is all about. Once you've done so, click on the Metadata tab and add the following:

add metadata

Then click on the Save button to commit your changes. Finally, click on the Deploy button to deploy the release. Once the app picks up on the change, the metadata that you supplied also becomes available. 

You can see that it now shows the version number of the release:

version number

Versioning

Sometimes you will push an update out with Ionic Deploy, but also rebuild your app and ship the same changes as a new binary in the App Store. Watch out, though, because Ionic doesn't know that your app already contains that update, and your app will prompt the user to download the latest updates the first time your app is run.

The versioning feature can help prevent this. With the versioning feature, you can specify the version of the app which can receive the updates:

  • Minimum: deploys only if the current app version is higher than or equal to this version.
  • Maximum: deploys only if the current app version is equal to or lower than this version.
  • Equivalent: do not perform a deploy if the current app version is equal to this version.

You can add versioning information by clicking on the EDIT link on a specific release, and then going to the VERSIONING tab. From there, you can specify the versions (either iOS or Android) that you want to target.

Versioning

What Ionic does is compare this version with the one that you specified in your config.xml file:
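The id below is a placeholder; yours is whatever you chose when creating the project:

    <widget id="com.example.deployapp" version="0.0.1"
        xmlns="http://www.w3.org/ns/widgets">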

If the app version falls between the minimum and maximum specified, the release is picked up. And if the app version is equal to the equivalent version value, the release is ignored. So for the above screenshot, if the version indicated in the config.xml file is 0.0.1, the release will be ignored by the app.

Asset Updates

The next change that we're going to make is to show an image.

The first thing that you need to do is find an image. Put it inside the src/assets/img folder and link it from the src/pages/home/home.html file:
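The file name here is hypothetical; use whatever image you copied into the folder:

    <img src="assets/img/sample.png" />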

Upload your changes as a new release to Ionic Deploy.

Once uploaded, go to your Ionic app dashboard and update the release with a note and the corresponding version in the metadata. Save the changes and deploy it.

Opening the app should now pick up this new release, and updating it would apply the changes to the UI.

asset changes

Deploy Channels

By default, Ionic Deploy has three channels which you can deploy to: dev, staging, and production. But you can also create new channels for your app to listen for updates on. You can do that by clicking on the gear icon on the Ionic Deploy tab on your app dashboard:

Create Deploy Channel

This is useful for things like A/B testing, so you can push specific changes to specific users only.

Don't forget to update your code to use that specific deploy channel:
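    this.deploy.channel = 'my-custom-channel'; // the channel name you created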

Rollback

If you've pushed something you shouldn't have, you could use the rollback feature. With this feature, you can push a previous release back to your users. 

Note that you can't fix broken code by rolling back! For example, if you have a syntax error in your JavaScript code, it will break the whole app and the code for checking for updates won't ever run. To fix those kinds of errors, the only thing you can do is release a new version on the app store. 

You can rollback by clicking on the Roll back to here link on any given deploy. 

Rollback

This will ask you to confirm whether you want to roll back or not. Once confirmed, it will be automatically set as the current release. So the code for picking up new releases will recognize it as the latest release and will prompt users to update. Rolled back releases will have an orange refresh icon.

You can also deploy a specific release by clicking on the Deploy link beside the release that you want to deploy.

Using Git Hooks

You can automate the deployment of app updates on Ionic Deploy with Git hooks. Git hooks allow you to execute scripts before or after specific Git events such as commit, push, and receive. In this case we will be using the pre-push hook to upload our changes to Ionic Cloud right before the git push command does its thing. 

Start by renaming the sample pre-push script to the actual name recognized by Git:
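    mv .git/hooks/pre-push.sample .git/hooks/pre-push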

Open the file in your text editor and replace its contents with the following:
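A minimal script that matches the behaviour described below:

    #!/bin/sh
    ionic upload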

Now commit your changes and push them to a remote repo:
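    git add .
    git commit -m "Update app UI"
    git push origin master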

Right before the git push command is executed, ionic upload will be executed. 

You can also automatically deploy the release if you want:
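A sketch, assuming the CLI’s --deploy flag, which takes the name of the channel to deploy to:

    ionic upload --deploy=dev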

This won't work for our example, though, because you can't specify metadata!

If you want to take the deployment process further, I recommend you check out the HTTP API for Ionic Deploy. This allows you to programmatically deploy changes to your app from your Continuous Integration server. It allows you to update the version numbers and metadata on your deployments as well. All of this can be done automatically and without ever touching the Ionic app dashboard.

Conclusion

That's it! In this tutorial you've learned about Ionic Deploy and how you can use it to push updates to your Ionic app. This is a powerful feature which allows you to build a robust versioning and update system into your app.

Stay tuned for more content on Ionic and on cloud services like Ionic Deploy! If you want a complete introduction to getting started with Ionic 2 app development, check out our course here on Envato Tuts+.

And check out some of our other posts on Ionic and cross-platform mobile app development.

2017-06-06T13:36:21.000Z2017-06-06T13:36:21.000ZWernher-Bel Ancheta

WWDC 2017 Aftermath: The Most Important Announcements

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-28995

That was quite the keynote, don't you think? Nobody knew what to expect, due to the absence of rumors or leaks. But I think I speak for many Apple developers when I say that it was a great keynote.

What surprised me most was the fast pace. It was clear that Apple had a lot to announce to developers and the press. Let's take a look at the most important announcements. 

tvOS and watchOS

Apple spent the first few minutes of the keynote discussing tvOS 11 and watchOS 4. The changes introduced in tvOS 11 are minor. The most important announcement was Amazon finally bringing Amazon Prime Video to Apple TV. While this is great for consumers, it’s not what you and I are most interested in when we watch a WWDC keynote.

The changes coming to watchOS 4 are more interesting. The fourth major release of watchOS is a minor update that primarily focuses on refining the operating system. The focus is on speed and reliability. But it also introduces a few new features, such as a watch face powered by Siri, several new fitness features, and better support for Apple Music.

watchOS 4 has better support for Siri and Apple Music

Third-party applications now have direct access to Core Bluetooth, which means it’s no longer necessary for your Apple Watch to be paired with your iPhone to communicate with Bluetooth devices. This is going to be very useful for devices with embedded sensors. Apple also announced new background modes for applications and person-to-person payments with Apple Pay.

macOS

The next major release of macOS is named High Sierra. Many of us watching the keynote thought Craig Federighi was cracking a joke. But he wasn’t. As with previous releases (Lion and Mountain Lion, Yosemite and El Capitan), the name suggests that with High Sierra, Apple is focused on improving the operating system under the hood. And that's exactly what they've done.

The next major release of macOS is named High Sierra

High Sierra introduces a range of changes and improvements, including the Apple File System (APFS), proper support for virtual reality (VR), and support for external graphical processing. While it’s unclear how big the latter is going to be, Apple also announced an External Graphics Development Kit for developers who want to experiment with this technology. External graphical processing is supported by any Mac with a Thunderbolt 3 connection.

Apple’s External Graphics Development Kit

Virtual reality and graphics processing were important areas of focus during the keynote. Apple announced Metal 2 for graphics acceleration, Core ML (Machine Learning), and an improved and faster Safari with increased privacy.

But Apple didn’t only announce software. The company also presented updates to its notebook and desktop lineup, including the introduction of the rumored iMac Pro, the most powerful Mac the company has ever shipped.

iMac Pro

This powerhouse is due for December, but what the company revealed during the keynote looks very, very promising. Did I mention it comes in space gray, including mouse and keyboard?

iOS

Apple’s flagship operating system received most of the attention. The announcements were even a bit overwhelming. The most important announcements related to iPad. The company not only announced a new version of its popular iPad Pro with a brand new 10.5” display, it also showed off several features that take multitasking on iPad to the next level. Drag and drop was probably the most compelling. This is going to be a game changer for users who rely heavily on getting work done on their iPad.

Drag and Drop on iOS 11

Apple also introduced Finder for iOS. Sorry. I mean Files for iOS. Let’s be honest, Files looks and feels very much like Finder on macOS. But it’s better in several ways. Because Files is backed by Apple’s iCloud services, features like synchronization of favorites are built in.

The Dock on iPad is now more powerful and can be used to switch quickly between applications and to interact with other applications. It even includes a section that predicts which applications you might use next.

iPad Dock

Interesting to developers is the introduction of ARKit and Core ML, both also available on macOS. Apple is a big proponent of augmented reality and, with ARKit, it hands developers the tools and resources to bring augmented reality to third-party applications.

Augmented reality isn’t new, but it’s hard to implement and get right. ARKit aims to solve many common obstacles developers face, allowing them to focus on building features instead of toiling with technical challenges.

Like many other technology companies, Apple strongly believes in the future of machine learning. The Core ML framework provides developers with the tools to integrate machine learning into their applications. For many developers, this is a whole new world that opens up. It could take many third-party applications from good to great if implemented smartly.

App Store

It’s clear Apple is listening to the feedback of its customers and its developer community. The company is committed to making the App Store better, and it does this in iOS 11 by overhauling the design of its App Store.

It no longer looks like the App Store you've known for the past nine years. On iOS 11, it feels more like a high-end store or a magazine, featuring high-quality products. It’s too early to tell what impact this change is going to have for companies and developers, but it sure looks promising.

The App Store received a design overhaul

The new design features a Today tab, putting one application in the spotlight. It probably won’t surprise you that games have a special spot in Apple's new App Store. It also features how-to articles that teach customers more about new applications, and Apple touts that search has improved substantially.

The redesigned product pages are crisp and clean. Developers can now add a subtitle to their product page, which will hopefully get rid of applications that have names stuffed with keywords—that's most likely Apple’s goal with this addition.

Customers can now buy in-app purchases directly from the product page of an application. You no longer need to search an application for its in-app purchases.

Hardware

For developers, hardware announcements are almost as important as software announcements. This year's keynote was packed with new hardware.

iPad Pro

The introduction of a brand new iPad Pro shows Apple’s commitment to continuing to invest in what they believe to be the next generation of computing devices. The company firmly believes most people no longer need a notebook or a desktop to get work done.

The new iPad Pro sports an improved 10.5" Retina display, increasing the surface area by 20% compared to the previous 9.7" model. The 12.9" model sticks around and also sports the improved display. The iPad Pro is brighter, displays an even wider range of colors, and is less reflective. Both devices are powered by the A10X Fusion chip, bringing even more power to these mobile powerhouses.

Apple’s New iPad Pro

But the iPad Pro truly shines if combined with Apple’s flagship operating system, iOS 11. As I mentioned earlier, the next release introduces several major improvements to multitasking—such as drag and drop between applications, an improved Dock, and the Files application.

Apple Pencil is the cherry on the cake. The operating system now has wider support for Apple Pencil. For example, Apple's Notes application has improved its support for Apple Pencil, and handwritten notes are now searchable thanks to the integration of machine learning.

Mac

Apple received quite a bit of flak after the introduction of the new MacBook Pro late last year, especially from its professional customers. But it seems Apple has been hard at work to make sure its professional user base knows the company hasn’t forgotten about the Mac.

Earlier this year, Apple revealed that it’s working hard on the next generation Mac Pro, and during yesterday’s keynote, the company announced significant updates to its iMac and notebook line.

The company's popular iMac line is finally ready for virtual reality. A demo by John Knoll got rid of any doubt we might have. But the company didn’t only introduce updates; it also announced a new model of the iMac, the iMac Pro. This powerhouse is due for December and, if Apple delivers on its promise, it’s going to be the most powerful Mac the company has ever shipped.

HomePod

The last announcement of the company was the introduction of a brand new product, HomePod: a powerful speaker that directly competes with companies like Sonos. But Apple also takes on Google, Microsoft, and Amazon through HomePod's Siri integration. That's right. You can talk to this speaker.

HomePod

HomePod is planned for a December release in the United States, the United Kingdom, and Australia. But what’s different about HomePod? How does it differ from Sonos? And what sets it apart from Google Home, Microsoft’s Cortana, and Amazon’s Alexa? According to Apple, it combines amazing sound with smarts—adding intelligence to a powerful speaker.

It shouldn’t surprise you that it looks very nice and will fit nicely in your living room. It’s powered by the company’s A8 chip and sports a collection of tweeters, several microphones, and a woofer. While it looks promising, Apple still has a lot of work before it's ready for prime time. And that includes making Siri as good and reliable as the virtual assistants of the company's competition.

Xcode and Swift

You may be wondering why I haven't mentioned Xcode or Swift. While the developer tools usually receive little attention during the keynote, it's clear Apple has been hard at work on Xcode 9, the next major release of the company's IDE.

The source editor, for example, has been rebuilt in Swift from the ground up. Refactoring has always been a mediocre experience in Xcode. That's going to change with Xcode 9. The new toolchain is supposed to be fast, reliable, and intelligent. I can’t wait to try this out.

Xcode 9 adds improved support for Swift. If you've worked with Swift, then you understand that this isn't just a nice-to-have—it's essential. Problems with Swift support have been plaguing developers ever since the language was introduced several years ago. Apple promises that's a thing of the past.

The Next Major Release of Xcode

But Xcode 9 is more than a slew of updates and improvements. How does wireless debugging sound? It’s now possible to debug your iOS and tvOS devices over the network, whether the connection is Wi-Fi or wired. 

There’s so much to cover, and I’m only scratching the surface. Other nice additions are named colors in asset catalogs, support for HEIF images (a new format that reduces image size), a brand new Core ML editor, and, last but not least, the ability to run multiple simulators side by side.

Because Swift is open source, we already know what’s coming. The fourth major release of Swift includes an overhaul of the String structure, the Codable protocol for native encoding and decoding of types, better support for key paths, and much more.

We will be covering more about the WWDC announcements in the coming weeks. Make sure to watch the Platforms State of the Union. That should whet your appetite.

2017-06-07T03:55:51.011Z2017-06-07T03:55:51.011ZBart Jacobs

Google I/O 2017 Aftermath: What's New in Android Studio 3?

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-28937

In this series of tips, we’ve been taking a closer look at some of the new Android features and tools announced at this year’s Google I/O that you can get your hands on today.

In this post, we’re going to get some hands-on experience with the major new features coming up in Android Studio 3, by exploring the Android Studio 3.0 Preview.

If you haven’t already, you can download the Preview from the official Android website. Just note that this is an early access release, so it’s not recommended to use it for your day-to-day development work. 

Built-In Support for Kotlin

One of the most exciting Android announcements from this year’s Google I/O keynote is that Google is making Kotlin a first-class language for Android development. 

Although you could previously add Kotlin support to Android Studio via a plugin, Android Studio 3.0 will have Kotlin support built in, making it even easier to start using Kotlin for Android development.

There are three ways that you can start using Kotlin in the Android Studio 3.0 Preview:

Start a New Project With Kotlin

First, if you’re creating a new project then the project creation wizard now features an Include Kotlin Support checkbox.

When you’re creating a project, you can select the Include Kotlin Support checkbox

When you select this option, Android Studio generates all the code your project needs to support Kotlin. If you open your project-level build.gradle file, you’ll see that the version of Kotlin you’re using has been added to the buildscript section:
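A sketch of the generated section; the exact Kotlin version is whatever the Preview wizard pins at the time, assumed here to be 1.1.2-4:

    buildscript {
        ext.kotlin_version = '1.1.2-4'
        repositories {
            jcenter()
        }
        dependencies {
            classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
        }
    }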

And if you open your module-level build.gradle file, you’ll notice that some Kotlin-specific lines have been added here, too:
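Typically the Kotlin plugin is applied and the standard library is added, along these lines:

    apply plugin: 'com.android.application'
    apply plugin: 'kotlin-android'

    dependencies {
        compile "org.jetbrains.kotlin:kotlin-stdlib-jre7:$kotlin_version"
    }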

Convert Existing Java Files to Kotlin

The second method is to convert an existing Java file into a Kotlin file:

  • Select the file you want to convert in Android Studio’s Project view.
  • Select Code > Convert Java file to Kotlin file from the Android Studio toolbar. This runs the Java file through a converter, generating the equivalent Kotlin code.
  • At this point, Android Studio will display a banner informing you that Kotlin isn’t configured in your project. Click the Configure link that appears in this banner.
  • Select Android with Gradle.
  • Choose from All modules, All modules containing Kotlin files, or select the specific module where you want to support Kotlin. 
  • Click OK.

Add a Kotlin Class to an Existing Project

The final method is to create a new Kotlin class, by Control-clicking the directory where you want to create your class, and then selecting New > Kotlin file / class. Again, if your project isn’t configured to support Kotlin, then Android Studio will display the Configure banner.

And, if you’re not familiar with Kotlin and want to find out what all the fuss is about, then we’ve published a whole series walking you through the process of getting started with Kotlin for Android development.

A New Android Profiler 

Android Studio 3.0 Preview replaces the familiar Android Monitor window with a brand-new Android Profiler.

To take a look at this new tool, select View > Tool Windows > Android Profiler from the Android Studio toolbar, or click the Android Profiler tab that appears along the bottom of the IDE window.

Similar to Android Monitor, the Android Profiler can only communicate with a running app, so make sure the app you want to test is running on an AVD or a connected smartphone or tablet, and that it’s currently visible onscreen. Select the device and the process you want to profile, using the dropdown menus.

As soon as you’ve selected a process, the Android Profiler attaches to that process and displays a timeline of your app’s Network, CPU and Memory usage, which updates in real time. 

The Android Profiler displays three timelines: CPU, Memory, and Network

To view more information about Network, CPU or Memory, simply click that section of the Android Profiler, which launches a new profiler dedicated entirely to your chosen topic. 

Network Profiler

This Profiler displays a timeline of your network activity, displaying data sent and received, and the current number of connections. Note that the Network Profiler currently only supports the HttpURLConnection and OkHttp libraries, so you may be unable to view your app’s network activity if you’re using a different library.

CPU Profiler

This Profiler displays your app’s CPU usage and thread activity. You can also see exactly which methods are being executed and the CPU resources each method consumes, by recording a method trace. 

To record a trace, open the dropdown menu and select either Sampled or Instrumented, and then click the Record button. Spend some time interacting with your app, making sure to perform the actions you want to record, and then click the Stop recording button. The CPU Profiler will then display all the data recorded during this sampling period.

Memory Profiler

This Profiler helps you identify memory leaks, memory churn and undesirable memory allocation patterns, by displaying a graph of your app’s memory use. You can also use the Memory Profiler to capture a heap dump, which provides a snapshot of the objects your app has allocated, along with how much memory each object is using and where references to each object are being held in your code. Finally, you can record your app’s memory allocations, by clicking the Record memory allocations button.  

Create Standalone Instant App Modules

Android Instant Apps allow users to run applications instantly via a URL, without having to install the application first. This feature allows you to make your app’s most important features available to more users—while hopefully tempting them into downloading the full version of your app in the process. 

The first step to adding Android Instant App functionality to your project is to break your app into smaller modules, so users have the option to download a specific portion of your project. Since breaking your app into multiple, standalone modules isn’t exactly an easy task, Android Studio 3.0 Preview introduces a feature to help you modularize any class within your application: 

  • Open the class you want to modularize, and highlight the class name.
  • Control-click the class, and then select Refactor > Modularize.
Control-click a class and select Refactor > Modularize from the dropdown that appears
  • Select Preview to see the exact classes, methods and resources that are going to be incorporated into this new module.
  • If required, deselect some of the items you don’t want to include in this module. If you do remove one or more items, then you’ll typically need to spend some time adjusting the resulting module’s code, to make sure it functions correctly.
  • When you’re happy with your selection, go ahead and create your module by clicking OK.

Improved Java 8 Support 

Android Studio 3.0 Preview 1 provides built-in support for a subset of Java 8 language features and the third-party libraries that use them, specifically:

  • Lambda expressions
  • Method References
  • Type Annotations
  • Default and static interface methods
  • Repeating annotations

In addition, the following Java 8 features are compatible with API level 24 and higher: 

  • java.lang.annotation.Repeatable
  • java.util.function
  • java.lang.reflect.Method.isDefault()
  • java.lang.FunctionalInterface
  • java.util.stream
  • annotatedElement.getAnnotationsByType(Class)

To take advantage of this improved Java 8 support, you’ll need to update to version 3.0.0-alpha1 (or higher) of the Gradle plugin. Start by opening your gradle-wrapper.properties file and updating the distributionUrl:
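For example (the exact Gradle distribution the Preview requires may change between builds):

distributionUrl=https\://services.gradle.org/distributions/gradle-4.0-milestone-1-all.zip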

Next, open your project-level build.gradle file and make sure you’re using Google’s new Maven repository. You’ll also need to update to version 3.0.0-alpha1 of the Gradle plugin: 
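Roughly as follows (on older Gradle versions, Google's Maven repository can also be referenced as maven { url 'https://maven.google.com' }):

buildscript {
    repositories {
        jcenter()
        google() // Google's new Maven repository
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:3.0.0-alpha1'
    }
}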

If you’ve previously enabled the Jack compiler, then you’ll need to disable it in order to take advantage of Android Studio’s improved Java 8 support. To remove Jack, open your module-level build.gradle file and delete the jackOptions block:
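The block you're looking for sits inside defaultConfig and looks like this:

android {
    defaultConfig {
        // Delete this entire block:
        jackOptions {
            enabled true
        }
    }
}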

Finally, you’ll need to add the Java 8 compileOptions block to your build.gradle file, if you haven’t already:  
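It looks like this:

android {
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}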

Custom Fonts Made Even Easier

Google is about to make it much easier to add custom fonts to your app, by upgrading fonts to a fully-supported resource type in Android O. We’ve already explored working with custom fonts in detail, but the Android Studio 3.0 Preview adds a handy feature that makes it even easier to browse for custom fonts and add them to your project:

  • Open any layout resource file that contains a TextView.
  • Select the Design tab.
  • In the layout editor, select the TextView. The Properties menu should open along the left-hand side of the Android Studio window.
  • Scroll to the menu’s textAppearance section, and then click its accompanying arrow icon to expand this section. Open the fontFamily dropdown, and select More fonts. This opens a window where you can browse a library of fonts that are available to download.
In the Properties menu, expand the textAppearance section, and then open the fontFamily dropdown
  • To add a font to your project, select it and then click OK.
  • Open your project’s res/font folder, and you’ll see that this font has been added to your project, ready for you to use.
  • To apply this font to any piece of text, simply add the attribute android:fontFamily="@font/name-of-your-font", as in the sketch below. 
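For example, assuming you downloaded a font that was saved as lobster.ttf in res/font:

<TextView
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Hello World!"
    android:fontFamily="@font/lobster" />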

Other Notable Updates

Android Studio 3.0 Preview also introduces some useful new tools: 

APK Debugger

This tool makes it easier to profile and debug APKs—simply select File > Profile or debug APK from the Android Studio toolbar, and then select the APK you want to take a closer look at. Alternatively, select Profile or debug APK from Android Studio’s Welcome screen.

Device File Explorer

You can use this tool to interact with the connected device’s file system, allowing you to view, copy and delete files, and also upload files to your Android device. To use this tool, either select the Device File Explorer tab towards the bottom-right of the Android Studio screen, or select View > Tool Windows > Device File Explorer from the Android Studio toolbar.

Adaptive Icon Wizard

In Android O, Original Equipment Manufacturers will be able to apply a mask to all the application launcher icons across their device. To make sure your launcher icon displays correctly regardless of the mask being used, you’ll need to provide an adaptive launcher icon. 

We’ve explored creating adaptive icons previously, but the new Android Studio Preview introduces a dedicated wizard that makes it easier to build these adaptive icons. To launch the wizard, Control-click your project’s res folder and select New > Image Asset. In the window that appears, open the Icon type dropdown, and set it to Launcher Icons (Adaptive and Legacy). You can then build your adaptive icon by selecting a foreground and background layer.

Android Studio's Image Asset window will walk you through the process of creating an adaptive icon

Conclusion

In this tip, we explored some of the most exciting new tools and features already available in Android Studio 3.0 Preview, including built-in support for the Kotlin programming language, improved Java 8 support, and the all-new Android Profiler. With all the new features and tools available, Android app development is about to get even more exciting!

While you're here, check out some of our other tutorials and our video courses on Android app development!

2017-06-08T10:43:31.000Z2017-06-08T10:43:31.000ZJessica Thornsby

Mobile Development Platforms

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-28944

When you're first getting started with mobile development, it can be hard to choose a platform. Even worse, each platform has its own set of languages and tools to choose from. So how can you decide?

This tutorial will help you pick a suitable mobile development platform so that you can jump in and start coding apps.

Platforms and Their Market Shares

A platform—an ecosystem of mobile devices, toolkits, and apps—is usually defined by its operating system (OS). The platform vendors are large companies such as Google, Apple, Microsoft, etc. Each one has developed an OS which they license to device manufacturers. Sometimes they may also be device manufacturers themselves. The device manufacturers design and build the devices (mostly smartphones and tablets) with the relevant OS pre-installed. These devices are then sold to consumers (the users). 

Some basic apps, developed by the device manufacturer or even the platform vendor, may come pre-installed on the device. However, the vendor and the device manufacturer alone are unable to cater to the ever-growing needs of the users of these devices. So they rely on "third-party" developers—like you!—to fill in the gap of supply and demand. 

To support the developers who want to sell apps for the platform, they publish SDKs, APIs and other tools to make app development easier. Also, an official "app store" may be launched, to which the developers can publish their apps and from which the consumers can browse and download them. Thus, a whole app ecosystem is built around the platform.

Currently, Android OS by Google has the largest global market share, with a whopping 86.1%. Apple's iOS has 13.7% and holds the second position. The remaining portion of around 0.2% is shared by all the other vendors combined. This includes Windows Mobile by Microsoft, BlackBerry OS, Tizen OS, Sailfish OS, and Ubuntu Touch. 

The global composition may change significantly in certain countries. For example, in the United States, Android's market share is 53.4% and iOS share is 44.5%, a notable difference when compared with the global market share. If you have a specific market in mind, it would be a good idea to research that target demographic to find out which platform they are likely to use!

CPU Architecture

While all these platforms support ARM CPU architecture, Android extends its support even further, covering x86 and MIPS architectures. Tizen, Sailfish OS, and Ubuntu Touch also support x86 architecture. However, unless you are writing low-level, processor-specific native code, the CPU architectures supported by each platform won't affect your choice.

How to Choose a Platform

Your Familiarity With Programming Languages

You'll have to write lots of code as a mobile app developer. There are certain tools that allow you to avoid typing in a text-based code editor—for example, both Android and iOS have drag-and-drop tools for building graphical user interfaces. These won't let you fully use the platform features and capabilities, though. To build complex apps, you'll have to learn the programming language for your platform.

So you might have to put in some effort to learn a new programming language or master one you already know. All the major platforms use popular programming languages with large developer communities. So make sure you leverage your knowledge and skills in those languages.

If you are familiar with Java, you'll find it easier to develop Android apps with Android SDK (Software Development Kit). However, some advanced app features will require you to use C and C++ skills, with the Android NDK (Native Development Kit). Also, it is now possible to program for Android with alternative languages such as Kotlin.

iOS requires Objective-C or Swift programming skills. 

If you are a fan of Microsoft's Visual Studio .NET, you'll be happy developing Windows Mobile apps with C#. 

To develop for BlackBerry or Tizen, just learn HTML, CSS, and JavaScript. 

Master QML or Python, and you'll be ready to develop Sailfish apps. 

If you love Ubuntu Touch, you should learn QML or HTML and JavaScript. 

Finally, if you already know web languages such as JavaScript and CSS, you can use these to develop for any platform with a mobile cross-platform framework.

New programming languages are constantly being proposed and promoted. While it's not always clear whether these new languages have a big advantage over the old ones, it's a good idea to stay tuned for the latest trends. Some programming languages may become obsolete with the introduction of the new ones—Swift is a replacement for Objective-C, for example—and some old languages may find renewed existence, with totally new uses.

Native vs. Hybrid Development

You have two options when it comes to developing smartphone apps: native apps and hybrid. A native app published on one platform won't run on another platform. For example, you can't install an Android native app on an iOS device. You need to publish a separate platform-specific version for that, built using the appropriate development and deployment tools, targeting iOS. By contrast, a hybrid app is a web app developed with HTML, CSS, and JavaScript, and wrapped in a native app shell (or UI). 

What's great about native apps is that they are superior in performance, and they fully utilize the device's capabilities. Also, they are more secure. The downside is that the developer has to maintain a separate codebase for each platform. Since hybrid apps have only moderate access to native APIs, their performance and level of user experience (UX) lag somewhat behind. The key advantage of hybrid apps is that the developer can publish to multiple platforms from the same codebase.

If you want to build native apps, you might want to check out our comprehensive courses on getting started coding apps for Android or iOS.

Ionic 2 is a popular framework for developing cross-platform hybrid mobile apps. It is based on the Angular 2 web framework. If you want to learn more, check out some of our courses or tutorials.

Native Cross-Platform Apps

Recently, a new batch of mobile cross-platform frameworks have emerged. These combine the best features of native apps and hybrid apps—they're fast and light and can access the full power of the native device, but they also are coded with JavaScript and other web languages, so a lot of code can be reused between platforms.

React Native and NativeScript are popular native cross-platform frameworks. If you want to learn more about these, check out our comprehensive beginner course or some of our many tutorials.

Your Ability to Learn

If you're a fast learner, then you could easily master the native development track. You'll need to understand basic programming concepts such as object-oriented programming (OOP), and you have to learn to be comfortable with the platform-specific technical concepts—such as Application Lifecycle Management in Android, for example. 

To get started, just download the necessary tools and SDKs from the platform vendor and give it a try. Most of these tools are open source, and there are plenty of code samples and app templates bundled with them to help you get started fast. 

If you are a web developer and want to explore the smartphone development space, then you might be more comfortable starting with hybrid or cross-platform native development. 

System Setup & Ease of Coding

Another factor to consider is the OS platform, and sometimes the hardware setup of the development computer. 

If you want to develop native iOS apps, you won't be able to do so on a normal Windows computer. You need a Mac with macOS, and Xcode, Apple's IDE for iOS development. Similarly, Ubuntu Touch native apps are best developed with an Ubuntu computer. While Android SDK runs on all three major desktop OS platforms, it's always advisable to check if your system meets the recommended specs before you start developing.

App Store Policies & Revenue Sharing

Both Google's and Apple's app stores charge a nominal registration fee. Although both offer the same revenue sharing percentage (70% of sale price at present), developers can, in theory, earn more revenue by publishing on Apple's. That's because the number of apps is relatively smaller and there's less competition among similar apps. Also, you need to be aware that not all geographic locations are allowed to publish paid apps on Google's Play Store. So you must think of a method of monetizing your app in advance.

Besides selling the app itself, another way to monetize your app is by displaying ads or by offering to unlock additional features.

Your Target Audience

This is an important factor because the success of your app depends on how well you address your audience, or how effectively you solve their problems. While most smartphone users tend to belong to younger generations, there are smartphone apps dedicated to older people and disabled people. Since the platform market share changes by country and age group, it's a good idea to research the platform demographics, if you want to target a specific demographic. 

Supported Device Features by Platform

Not all the devices support every feature of a platform. On the one hand, there are high-end devices, often dubbed "flagship products", supporting most of the features. On the other hand, there are low-cost, entry-level devices, supporting only the basic features. Then, there's everything in between. 

So you need to be very selective and make informed decisions when you develop apps. Developing an app that needs features supported only by high-end devices might severely affect your app's sales.

On the other hand, this could be an opportunity, if you can offer users of an advanced device an app feature that is not available elsewhere.

Conclusion

In this post, I've looked at all the main mobile development platforms and tried to give you some guidance to help you choose between them. Applying these insights will surely help you become successful in your app development business.

2017-06-12T13:00:00.000Z2017-06-12T13:00:00.000ZBala Durage Sandamal Siripathi

SpriteKit: Actions and Physics

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-28961

In this series, we're learning how to use SpriteKit to build 2D games for iOS. In this post, we'll learn about two important features of SpriteKit: actions and physics.

To follow along with this tutorial, just download the accompanying GitHub repo. It has two folders: one for actions and one for physics. Just open either starter project in Xcode and you're all set.

Actions

For most games, you'll want nodes to do something like move, scale, or rotate. The SKAction class was designed with this purpose in mind. The SKAction class has many class methods that you can invoke to move, scale, or rotate a node's properties over a period of time. 

You can also play sounds, animate a group of textures, or run custom code using the SKAction class. You can run a single action, run two or more actions one after another in a sequence, run two or more actions at the same time together as a group, and even repeat any actions.

Motion

Let's get a node moving across the screen. Enter the following within Example1.swift.
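A sketch of the key lines, assuming the starter project's airplane sprite is named player (the destination and duration are illustrative):

let moveUp = SKAction.moveTo(y: 600, duration: 2.0)
player.run(moveUp)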

Here we create an SKAction and invoke the class method moveTo(y:duration:), which takes as a parameter the y position to move the node to and the duration in seconds. To execute the action, you must call a node's run(_:) method and pass in the SKAction. If you test now, you should see an airplane move up the screen.

There are several varieties of the move methods, including move(to:duration:), which will move the node to a new position on both the x and y axis, and move(by:duration:), which will move a node relative to its current position. I suggest you read through the documentation on SKAction to learn about all of the varieties of the move methods.

Completion Closures

There is another variety of the run method that allows you to call some code in a completion closure. Enter the following code within Example2.swift.
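Along these lines:

let moveUp = SKAction.moveTo(y: 600, duration: 2.0)
player.run(moveUp) {
    print("The plane has finished moving")
}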

The run(_:completion:) method allows you to run a block of code once the action has fully completed executing. Here we execute a simple print statement, but the code could be as complex as you need it to be.

Sequences of Actions

Sometimes you'll want to run actions one after another, and you can do this with the sequence(_:) method. Add the following to Example3.swift.
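A sketch, with illustrative values:

let moveUp = SKAction.moveTo(y: 600, duration: 2.0)
let scaleUp = SKAction.scale(to: 3.0, duration: 1.0)
let moveThenScale = SKAction.sequence([moveUp, scaleUp])
player.run(moveThenScale)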

Here we create two SKActions: one uses the moveTo(y:duration:), and the other uses the scale(to:duration:), which changes the x and y scale of the node. We then invoke the sequence(_:) method, which takes as a parameter an array of SKActions to be run one after the other. If you test now, you should see the plane move up the screen, and once it has reached its destination, it will then grow to three times its original size.

Grouped Actions

At other times, you may wish to run actions together as a group. Add the following code to Example4.swift.
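Something like this:

let moveUp = SKAction.moveTo(y: 600, duration: 2.0)
let scaleUp = SKAction.scale(to: 3.0, duration: 1.0)
let moveAndScale = SKAction.group([moveUp, scaleUp])
player.run(moveAndScale)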

Here we are using the same moveTo and scale methods as the previous example, but we are also invoking the group(_:) method, which takes as a parameter an array of SKActions to be run at the same time. If you were to test now, you would see that the plane moves and scales at the same time.

Reversing Actions

Some of these actions can be reversed by invoking the reversed() method. The best way to figure out which actions support the reversed() method is to consult the documentation. One action that is reversible is the fadeOut(withDuration:), which will fade a node to invisibility by changing its alpha value. Let's get the plane to fade out and then fade back in. Add the following to Example5.swift.
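One possible version:

let fadeOut = SKAction.fadeOut(withDuration: 1.0)
let fadeIn = fadeOut.reversed() // undoes the fade
player.run(SKAction.sequence([fadeOut, fadeIn]))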

Here we create a SKAction and invoke the fadeOut(withDuration:) method. In the next line of code, we invoke the reversed() method, which will cause the action to reverse what it has just done. Test the project, and you will see the plane fade out and then fade back in.

Repeating Actions

If you ever need to repeat an action a specific number of times, the repeat(_:count:) and repeatForever(_:) methods have you covered. Let's make the plane repeatedly fade out and then back in forever. Enter the following code in Example6.swift.
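A sketch:

let fadeOut = SKAction.fadeOut(withDuration: 1.0)
let fadePlayerSequence = SKAction.sequence([fadeOut, fadeOut.reversed()])
player.run(SKAction.repeatForever(fadePlayerSequence))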

Here we invoke the repeatForever(_:) method, passing in the fadePlayerSequence. If you test, you will see the plane fades out and then back in forever.

Stopping Actions

Many times you'll need to stop a node from running its actions. You can use the removeAllActions() method for this. Let's make the player node stop fading when we touch on the screen. Add the following within Example7.swift.
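A sketch of the touch handler inside the scene:

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    player.removeAllActions()
}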

If you touch on the screen, the player node will have all actions removed and will no longer fade in and out.

Keeping Track of Actions

Sometimes you need a way to keep track of your actions. For example, if you run two or more actions on a node, you may want a way to identify them. You can do this by registering a key with the node, which is a simple string of text. Enter the following within the Example8.swift.
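Roughly as follows, using a hypothetical key name of "fadeAction":

player.run(SKAction.repeatForever(fadePlayerSequence), withKey: "fadeAction")

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    // Only remove the action if the node is actually running it.
    if player.action(forKey: "fadeAction") != nil {
        player.removeAction(forKey: "fadeAction")
    }
}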

Here we are invoking the node's run(_:withKey:) method, which, as mentioned, takes a simple string of text. Within the touchesBegan(_:with:) method, we are invoking  action(forKey:) to make sure the node has the key we assigned. If it does, we invoke .removeAction(forKey:), which takes as a parameter the key you have previously set.

Sound Actions

A lot of times you'll want to play some sound in your game. You can accomplish this using the class method playSoundFileNamed(_:waitForCompletion:). Enter the following within Example9.swift.
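A sketch, assuming a hypothetical sound file named coin.wav in the app bundle:

let playSound = SKAction.playSoundFileNamed("coin", waitForCompletion: false)
player.run(playSound)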

The playSoundFileNamed(_:waitForCompletion:) takes as parameters the name of the sound file without the extension, and a boolean that determines whether the action will wait until the sound is complete before moving on. 

For example, suppose you had two actions in a sequence, with the sound being the first action. If waitForCompletion were true, then the sequence would wait until that sound was finished playing before moving to the next action within the sequence. If you need more control over your sounds, you can use an SKAudioNode. We will not be covering the SKAudioNode in this series, but it is definitely something you should take a look at during your career as a SpriteKit developer.

Frame Animation

Animating a group of images is something that many games call for. The animate(with:timePerFrame:) has you covered in those cases. Enter the following within Example10.swift.
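A sketch, with hypothetical texture names:

let textures = [SKTexture(imageNamed: "plane1"),
                SKTexture(imageNamed: "plane2"),
                SKTexture(imageNamed: "plane3")]
let planeAnimation = SKAction.animate(with: textures, timePerFrame: 0.1)
player.run(SKAction.repeatForever(planeAnimation))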

The animate(with:timePerFrame:) takes as parameters an array of SKTextures and a timePerFrame value, which sets how long each texture is displayed before the next change. To execute this action, you invoke a node's run method and pass in the SKAction.

Custom Code Actions

The last type of action we will look at is one that lets you run custom code. This could come in handy when you need to do something in the middle of your actions, or just need a way to execute something that the SKAction class does not provide for. Enter the following within Example11.swift.
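Something like this:

func printToConsole() {
    print("Running some custom code")
}

// Scenes are nodes too, so the scene can run actions itself.
run(SKAction.run(printToConsole))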

Here we invoke the scene's run(_:) method and pass a function printToConsole() as a parameter. Remember that scenes are nodes too, so you can invoke the run(_:) method on them as well.

This concludes our study of actions. There is a lot you can do with the SKAction class, and I would suggest after reading this tutorial that you further explore the documentation on SKActions.

Physics

SpriteKit offers a robust physics engine out of the box, with little setup required. To get started, you just add a physics body to each of your nodes and you're good to go. The physics engine is built on top of the popular Box2d engine. SpriteKit's API is much easier to use than the original Box2d API, however.

Let's get started by adding a physics body to a node and see what happens. Add the following code to Example1.swift.
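A sketch of the key lines (the circular body's radius is illustrative):

player.physicsBody = SKPhysicsBody(circleOfRadius: player.size.width / 2)
player.physicsBody?.affectedByGravity = false

// In touchesBegan(_:with:), let gravity take over on the first touch:
player.physicsBody?.affectedByGravity = true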

Go ahead and test the project now. You will see the plane sitting at the top of the scene. Once you press on the screen, the plane will fall off the screen, and it will keep falling forever. This shows how simple it is to get started using physics—just add a physics body to a node and you are all set. 

The physicsBody Shape

The physicsBody property is of type SKPhysicsBody, which is going to be a rough outline of your node's shape... or a very precise outline of your node's shape, depending on which constructor you use to initialize this property. 

Here we have used the init(circleOfRadius:) initializer, which takes as a parameter the radius of the circle. There are several other initializers, including one for a rectangle or a polygon from a CGPath. You can even use the node's own texture, which would make the physicsBody a near exact representation of the node. 

To see what I mean, update the GameViewController.swift file with the following code. I have commented the line to be added.
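In viewDidLoad, something along these lines (the scene name follows the default template; the line to add is commented):

if let view = self.view as! SKView? {
    // Add this line to draw the outlines of all physics bodies:
    view.showsPhysics = true

    if let scene = SKScene(fileNamed: "GameScene") {
        scene.scaleMode = .aspectFill
        view.presentScene(scene)
    }
}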

Now the node's physicsBody will be outlined in green. In collision detection, the shape of physicsBody is what is evaluated. This example would have the circle around the plane guiding the collision detection, meaning that if a bullet, for example, were to hit the outer edge of the circle, then that would count as a collision.

circle body

Now add the following to Example2.swift.
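The key line looks something like this:

// Build the body from the sprite's own texture for a near exact outline.
player.physicsBody = SKPhysicsBody(texture: player.texture!, size: player.size)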

Here we are using the sprite's texture. If you test the project now, you should see the outline has changed to a near exact representation of the sprite's texture.

texture body

Gravity

We set physicsBody's affectedByGravity property to false in the previous examples. As soon as you add a physics body to a node, the physics engine will take over. The result is that the plane falls immediately when the project is run! 

You can also set the gravity on a per node basis, as we have here, or you can turn off gravity altogether. Add the following to Example3.swift.
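A sketch of the two relevant pieces:

// In didMove(to:), start with zero gravity:
physicsWorld.gravity = CGVector(dx: 0, dy: 0)

// In touchesBegan(_:with:), restore Earth-like gravity:
physicsWorld.gravity = CGVector(dx: 0, dy: -9.8)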

We can set the gravity using the physicsWorld.gravity property. The gravity property is of type CGVector. We set both the dx and dy components to 0, and then when the screen is touched we set the dy property to -9.8. The components are measured in meters per second squared, and the default is (0.0, -9.8), which represents Earth’s gravity.

Edge Loops

As it stands now, any nodes added to the scene will just fall off the screen forever. We can add an edge loop around the scene using the init(edgeLoopFrom:) method. Add the following to Example4.swift.
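Inside the scene's didMove(to:), for example:

// Attach an edge loop to the scene itself so nodes stay onscreen.
physicsBody = SKPhysicsBody(edgeLoopFrom: frame)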

Here we have added a physics body to the scene itself. The init(edgeLoopFrom:) takes as a parameter a CGRect that defines its edges. If you test now, you will see that the plane still falls; however, it interacts with this edge loop and no longer falls out of the scene. It also bounces and even turns a little on its side. This is the power of the physics engine—you get all this functionality out of the box for free. Writing something like this on your own would be quite complex.

Bounciness

We have seen that the plane bounces and turns on its side. You can control the bounciness and whether the physics body allows rotation. Enter the following into Example5.swift.
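The two properties in question (the restitution value is illustrative):

player.physicsBody?.restitution = 0.8 // 0.0 (less bouncy) to 1.0 (very bouncy)
player.physicsBody?.allowsRotation = false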

If you test now, you'll see that the player is very bouncy and takes a few seconds to settle down. You will also notice that it no longer rotates. The restitution property takes a number from 0.0 (less bouncy) to 1.0 (very bouncy), and the allowsRotation property is a simple boolean.

Friction

In the real world, when two objects move against each other, there is a bit of friction between them. You can change the amount of friction a physics body has—this equates to the “roughness” of the body. This property must be between 0.0 and 1.0. The default is 0.2. Add the following to Example6.swift.
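A sketch, with sizes and positions standing in for the starter project's values:

let rectangle = SKSpriteNode(color: .red, size: CGSize(width: 400, height: 40))
rectangle.position = CGPoint(x: frame.midX, y: frame.midY)
rectangle.zRotation = CGFloat.pi / 4 // rotate so the plane can slide down
rectangle.physicsBody = SKPhysicsBody(rectangleOf: rectangle.size)
rectangle.physicsBody?.isDynamic = false // static: unaffected by gravity
rectangle.physicsBody?.friction = 0.0
addChild(rectangle)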

Here we create a rectangular Sprite and set the friction property on its physicsBody to 0.0. If you test now, you will see the plane very quickly glides down the rotated rectangle. Now change the friction property to 1.0 and test again. You will see the plane does not glide down the rectangle quite as fast. This is because of the friction. If you wanted it to move even more slowly, you could apply more friction to the player's physicsBody (remember the default is 0.2).

Density and Mass

There are a couple of other properties that you can change on the physics body, such as density and mass. The density and mass properties are interrelated, and when you change one, the other is automatically recalculated. When you first create a physics body, the body's area property is calculated and never changes afterward (it is read only). The density and mass are based on the formula mass = density * area.

When you have more than one node in a scene, the density and mass would affect the simulation of how the nodes bounce off each other and interact. Think of a basketball and a bowling ball—they're roughly the same size, but a bowling ball is much denser. When they collide, the basketball will change direction and velocity much more than the bowling ball.

Force and Impulse

You can apply forces and impulses to move the physics body. An impulse is applied immediately and only one time. A force, on the other hand, is usually applied for a continuous effect. The force is applied from the time you add the force until the next frame of the simulation is processed. To apply a continuous force, you would need to apply it on each frame. Add the following to Example7.swift.
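A sketch of the touch handler (the impulse vector is illustrative):

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    // An impulse is applied once, measured in Newton-seconds.
    player.physicsBody?.applyImpulse(CGVector(dx: 0, dy: 100))
}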

Run the project and wait till the player comes to rest on the bottom of the screen, and then tap on the player. You will see the player fly up the screen and eventually come to rest again at the bottom. We apply an impulse using the method applyImpulse(_:), which takes as a parameter a CGVector and is measured in Newton-seconds. 

Why not try the opposite and add a force to the player node? Remember you will need to add the force continuously for it to have the desired effect. One good place to do that is in the scene's update(_:) method. Also, you may want to try increasing the restitution property on the player to see how it affects the simulation.

Collision Detection

The physics engine has a robust collision and contact detection system. By default, any two nodes with physics bodies can collide. You have seen this in previous examples—no special code was required to tell the objects to interact. However, you can change this behaviour by setting a "category" on the physics body. This category can then be used to determine what nodes will collide with each other and also can be used to inform you when certain nodes are making contact.

The difference between a contact and a collision is that a contact is used to tell when two physics bodies are touching each other. A collision, on the other hand, prevents two physics bodies from crossing into each other's space—when the physics engine detects a collision, it will apply opposing impulses to bring the objects apart again. We have seen collisions in action with the player and the edge loop and the player and the rectangle from the previous examples.

Types of physicsBodies

Before we move on to setting up our Categories for the physics bodies, we should talk about the types of physicsBodies. There are three:

  1. A dynamic volume simulates objects with volume and mass. These objects are affected by forces and collisions in the physics world (e.g. the airplane in previous examples).
  2. A static volume is not affected by forces and collisions. However, because it does have volume itself, other bodies can bounce off and interact with it. You set the physics body's isDynamic property to false to create a static volume. These volumes are never moved by the physics engine. We saw this in action earlier with example six, where the airplane interacted with the rectangle, but the rectangle was not affected by the plane or by gravity. To see what I mean, go back to example six and remove the line of code which sets rectangle.physicsBody?.isDynamic = false.
  3. The third type of physics body is an edge, which is a static, volume-less body. We have seen this type of body in action with the edge loop we created around the scene in all the previous examples. Edges interact with other volume-based bodies, but never with another edge.

The categories use a 32-bit integer with 32 individual flags that can be either on or off. This also means you can only have a maximum of 32 categories. This should not present a problem for most games, but it is something to keep in mind.

Creating Categories

Create a new Swift file by going to File > New > File and making sure Swift File is highlighted.

New File

Enter PhysicsCategories as the name and press Create.

Physics Categories

Enter the following into the file you've just created.
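A sketch of that file:

import Foundation

struct PhysicsCategories {
    static let Player: UInt32 = 0x1 << 0   // 1
    static let RedBall: UInt32 = 0x1 << 1  // 2
    static let EdgeLoop: UInt32 = 0x1 << 2 // 4
}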

We use a structure PhysicsCategories to create categories for Player, EdgeLoop, and RedBall. We are using bit shifting to turn the bits on.

Now enter the following in Example8.swift.
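A sketch of Example8.swift under the same assumptions as before (image names, positions, and impulse values are illustrative):

import SpriteKit

class Example8: SKScene, SKPhysicsContactDelegate {

    let player = SKSpriteNode(imageNamed: "player")
    let redBall = SKSpriteNode(imageNamed: "redBall")
    var dx: CGFloat = 10.0
    var dy: CGFloat = 10.0

    override func didMove(to view: SKView) {
        physicsWorld.contactDelegate = self

        player.position = CGPoint(x: frame.midX, y: frame.midY)
        player.physicsBody = SKPhysicsBody(circleOfRadius: player.size.width / 2)
        player.physicsBody?.affectedByGravity = false
        player.physicsBody?.categoryBitMask = PhysicsCategories.Player
        player.physicsBody?.contactTestBitMask = PhysicsCategories.RedBall
        player.physicsBody?.collisionBitMask = PhysicsCategories.EdgeLoop
        addChild(player)

        redBall.position = CGPoint(x: frame.midX, y: frame.maxY - 100)
        redBall.physicsBody = SKPhysicsBody(circleOfRadius: redBall.size.width / 2)
        redBall.physicsBody?.affectedByGravity = false
        redBall.physicsBody?.categoryBitMask = PhysicsCategories.RedBall
        addChild(redBall)

        physicsBody = SKPhysicsBody(edgeLoopFrom: frame)
        physicsBody?.categoryBitMask = PhysicsCategories.EdgeLoop
        physicsBody?.contactTestBitMask = PhysicsCategories.Player

        player.physicsBody?.applyImpulse(CGVector(dx: dx, dy: dy))
    }
}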

Here we create the player as usual, and create two variables dx and dy, which will be used as the components of a CGVector when we apply an impulse to the player.

Inside didMove(to:), we set up the player and add the categoryBitMask, contactTestBitMask, and collisionBitMask. The categoryBitMask should make sense—this is the player, so we set it to PhysicsCategories.Player. We are interested in when the player makes contact with the redBall, so we set the contactTestBitMask to PhysicsCategories.RedBall. Lastly, we want it to collide with, and be affected by, the edge loop, so we set its collisionBitMask to PhysicsCategories.EdgeLoop. Finally, we apply an impulse to get it moving.

On the redBall, we just set its categoryBitMask. With the edgeLoop, we set its categoryBitMask, and because we are interested in when the player makes contact with it, we set its contactTestBitMask.

When setting up the contactTestBitMask and collisionBitMask, only one of the bodies needs to reference the other. In other words, you do not need to set up both bodies as contacting or colliding with the other.

For the edgeLoop, we set it to contact with the player. However, we could have instead set up the player to interact with the edgeLoop by using the bitwise OR (|) operator. Using this operator, you can combine multiple contact or collision bit masks. For example, a sketch using the categories defined earlier:
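// The player now reports contacts with both the red ball and the edge loop.
player.physicsBody?.contactTestBitMask = PhysicsCategories.RedBall | PhysicsCategories.EdgeLoop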

To be able to respond when two bodies make contact, you have to implement the SKPhysicsContactDelegate protocol. You may have noticed this in the example code.

To respond to contact events, you can implement the didBegin(_:) and didEnd(_:) methods. They will be called when the two objects have begun making contact and when they have ended contact respectively. We'll stick with the didBegin(_:) method for this tutorial.

Here is the code once again for the didBegin(_:) method.
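A sketch of that method, matching the setup sketched above:

func didBegin(_ contact: SKPhysicsContact) {
    var firstBody: SKPhysicsBody
    var secondBody: SKPhysicsBody

    // The bodies arrive in no guaranteed order, so sort them by category.
    if contact.bodyA.categoryBitMask < contact.bodyB.categoryBitMask {
        firstBody = contact.bodyA
        secondBody = contact.bodyB
    } else {
        firstBody = contact.bodyB
        secondBody = contact.bodyA
    }

    if (firstBody.categoryBitMask & PhysicsCategories.Player != 0) &&
       (secondBody.categoryBitMask & PhysicsCategories.RedBall != 0) {
        print("player made contact with the red ball")
    }

    if (firstBody.categoryBitMask & PhysicsCategories.Player != 0) &&
       (secondBody.categoryBitMask & PhysicsCategories.EdgeLoop != 0) {
        print("player made contact with the edge loop")
        // Invert the direction and keep the player moving.
        dx = -dx
        dy = -dy
        player.physicsBody?.applyImpulse(CGVector(dx: dx, dy: dy))
    }
}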

First, we set up two variables, firstBody and secondBody. The two bodies in the contact parameter are not passed in a guaranteed order, so we'll use an if statement to determine which body has the lower categoryBitMask and assign that to firstBody.

We can now check which physics bodies are making contact. We determine which bodies we are dealing with by ANDing (&) each body's categoryBitMask with the PhysicsCategories values we set up previously; if the result is non-zero, we know we have the right body.

Finally, we print which bodies are making contact. If it was the player and edgeLoop, we also invert the dx and dy properties and apply an impulse to the player. This keeps the player constantly moving.

This concludes our study of SpriteKit's physics engine. There is a lot that was not covered such as SKPhysicsJoint, for example. The physics engine is very robust, and I highly suggest you read through all the various aspects of it, starting with SKPhysicBody.

Conclusion

In this post we learned about actions and physics—two very important parts of the SpriteKit framework. We looked at a lot of examples, but there is still a lot you can do with actions and physics, and the documentation is a great place to learn. 

In the next and final part of this series, we'll put together everything we have learned by making a simple game. Thanks for reading, and I will see you there!

In the meantime, check out some of our comprehensive courses on Swift and SpriteKit development!


2017-06-13T10:00:00.000Z2017-06-13T10:00:00.000ZJames Tyner

SpriteKit: Actions and Physics

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-28961

In this series, we're learning how to use SpriteKit to build 2D games for iOS. In this post, we'll learn about two important features of SpriteKit: actions and physics.

To follow along with this tutorial, just download the accompanying GitHub repo. It has two folders: one for actions and one for physics. Just open either starter project in Xcode and you're all set.

Actions

For most games, you'll want nodes to do something like move, scale, or rotate. The SKAction class was designed with this purpose in mind. The SKAction class has many class methods that you can invoke to move, scale, or rotate a node's properties over a period of time. 

You can also play sounds, animate a group of textures, or run custom code using the SKAction class. You can run a single action, run two or more actions one after another in a sequence, run two or more actions at the same time together as a group, and even repeat any actions.

Motion

Let's get a node moving across the screen. Enter the following within Example1.swift.

Here we create an SKAction and invoke the class method moveTo(y:duration:), which takes as a parameter the y position to move the node to and the duration in seconds. To execute the action, you must call a node's run(_:) method and pass in the SKAction. If you test now, you should see an airplane move up the screen.

There are several varieties of the move methods, including move(to:duration:), which will move the node to a new position on both the x and y axis, and move(by:duration:), which will move a node relative to its current position. I suggest you read through the documentation on SKAction to learn about all of the varieties of the move methods.

Completion Closures

There is another variety of the run method that allows you to call some code in a completion closure. Enter the following code within Example2.swift.

The run(_:completion:) method allows you to run a block of code once the action has fully completed executing. Here we execute a simple print statement, but the code could be as complex as you need it to be.

Sequences of Actions

Sometimes you'll want to run actions one after another, and you can do this with the sequence(_:) method. Add the following to Example3.swift.

Here we create two SKActions: one uses the moveTo(y:duration:), and the other uses the scale(to:duration:), which changes the x and y scale of the node. We then invoke the sequence(_:) method, which takes as a parameter an array of SKActions to be run one after the other. If you test now, you should see the plane move up the screen, and once it has reached its destination, it will then grow to three times its original size.

Grouped Actions

At other times, you may wish to run actions together as a group. Add the following code to Example4.swift.

Here we are using the same moveTo and scale methods as the previous example, but we are also invoking the group(_:) method, which takes as a parameter an array of SKActions to be run at the same time. If you were to test now, you would see that the plane moves and scales at the same time.

Reversing Actions

Some of these actions can be reversed by invoking the reversed() method. The best way to figure out which actions support the reversed() method is to consult the documentation. One action that is reversible is the fadeOut(withDuration:), which will fade a node to invisibility by changing its alpha value. Let's get the plane to fade out and then fade back in. Add the following to Example5.swift.

Here we create a SKAction and invoke the fadeOut(withDuration:) method. In the next line of code, we invoke the reversed() method, which will cause the action to reverse what it has just done. Test the project, and you will see the plane fade out and then fade back in.

Repeating Actions

If you ever need to repeat an action a specific number of times, the repeat(_:count:) and repeatForever(_:) methods have you covered. Let's make the plane repeatedly fade out and then back in forever. Enter the following code in Example6.swift.

Here we invoke the repeatForever(_:) method, passing in the fadePlayerSequence. If you test, you will see the plane fades out and then back in forever.

Stopping Actions

Many times you'll need to stop a node from running its actions. You can use the removeAllActions() method for this. Let's make the player node stop fading when we touch on the screen. Add the following within Example7.swift.

If you touch on the screen, the player node will have all actions removed and will no longer fade in and out.

Keeping Track of Actions

Sometimes you need a way to keep track of your actions. For example, if you run two or more actions on a node, you may want a way to identify them. You can do this by registering a key with the node, which is a simple string of text. Enter the following within the Example8.swift.

Here we are invoking the node's run(_:withKey:) method, which, as mentioned, takes a simple string of text. Within the touchesBegan(_:with:) method, we are invoking  action(forKey:) to make sure the node has the key we assigned. If it does, we invoke .removeAction(forKey:), which takes as a parameter the key you have previously set.

Sound Actions

A lot of times you'll want to play some sound in your game. You can accomplish this using the class method playSoundFileNamed(_:waitForCompletion:). Enter the following within Example9.swift.

The playSoundFileNamed(_:waitForCompletion:) takes as parameters the name of the sound file without the extension, and a boolean that determines whether the action will wait until the sound is complete before moving on. 

For example, suppose you had two actions in a sequence, with the sound being the first action. If waitForCompletion was true then the sequence would wait until that sound was finished playing before moving to the next action within the sequence. If you need more control over your sounds, you can use an SKAudioNode. We will not be covering the SKAudioNode in this series, but is definitely something you should take a look at during your career as a SpriteKit developer.

Frame Animation

Animating a group of images is something that many games call for. The animate(with:timePerFrame:) has you covered in those cases. Enter the following within Example10.swift.

The animate(with:timePerFrame:) takes as a parameter an array of SKTextures, and a timePerFrame value which will be how long it takes between each texture change. To execute this action, you invoke a node's run method and pass in the SKAction.

Custom Code Actions

The last type of action we will look at is one that lets you run custom code. This could come in handy when you need to do something in the middle of your actions, or just need a way to execute something that the SKAction class does not provide for. Enter the following within Example11.swift.

Here we invoke the scene's run(_:) method and pass a function printToConsole() as a parameter. Remember that scenes are nodes too, so you can invoke the run(_:) method on them as well.

This concludes our study of actions. There is a lot you can do with the SKAction class, and I would suggest after reading this tutorial that you further explore the documentation on SKActions.

Physics

SpriteKit offers a robust physics engine out of the box, with little setup required. To get started, you just add a physics body to each of your nodes and you're good to go. The physics engine is built on top of the popular Box2d engine. SpriteKit's API is much easier to use than the original Box2d API, however.

Let's get started by adding a physics body to a node and see what happens. Add the following code to Example1.swift.

Go ahead and test the project now. You will see the plane sitting at the top of the scene. Once you press on the screen, the plane will fall off the screen, and it will keep falling forever. This shows how simple it is to get started using physics—just add a physics body to a node and you are all set. 

The physicsBody Shape

The physicsBody property is of type SKPhysicsBody, which is going to be a rough outline of your node's shape... or a very precise outline of your node's shape, depending on which constructor you use to initialize this property. 

Here we have used the init(circleOfRadius:) initializer, which takes as a parameter the radius of the circle. There are several other initializers, including one for a rectangle or a polygon from a CGPath. You can even use the node's own texture, which would make the physicsBody a near exact representation of the node. 

To see what I mean, update the GameViewController.swift file with the following code. I have commented the line to be added.
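Within viewDidLoad(), the change might look like this sketch of the Xcode template code:

override func viewDidLoad() {
    super.viewDidLoad()

    if let view = self.view as! SKView? {
        let scene = Example1(size: view.bounds.size)
        scene.scaleMode = .aspectFill
        view.presentScene(scene)

        view.ignoresSiblingOrder = true
        view.showsFPS = true
        view.showsNodeCount = true
        view.showsPhysics = true // the added line: outlines every physics body
    }
}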

Now the node's physicsBody will be outlined in green. In collision detection, the shape of physicsBody is what is evaluated. This example would have the circle around the plane guiding the collision detection, meaning that if a bullet, for example, were to hit the outer edge of the circle, then that would count as a collision.

circle body

Now add the following to Example2.swift.
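The change boils down to swapping the initializer, roughly like this inside didMove(to:):

// Build the body from the sprite's texture instead of a circle.
player.physicsBody = SKPhysicsBody(texture: player.texture!, size: player.size)
player.physicsBody?.affectedByGravity = false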

Here we are using the sprite's texture. If you test the project now, you should see the outline has changed to a near exact representation of the sprite's texture.

texture body

Gravity

You may have noticed that we set the physicsBody's affectedByGravity property to false in the previous examples. As soon as you add a physics body to a node, the physics engine takes over, and without that setting the plane would fall as soon as the project is run!

You can also set the gravity on a per node basis, as we have here, or you can turn off gravity altogether. Add the following to Example3.swift.
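A sketch of that example:

import SpriteKit

class Example3: SKScene {

    override func didMove(to view: SKView) {
        // Start with no gravity at all.
        physicsWorld.gravity = CGVector(dx: 0, dy: 0)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // On touch, restore Earth-like gravity.
        physicsWorld.gravity = CGVector(dx: 0, dy: -9.8)
    }
}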

We can set the gravity using the physicsWorld's gravity property. The gravity property is of type CGVector. We set both the dx and dy components to 0, and then when the screen is touched we set the dy component to -9.8. The components are measured in meters per second squared, and the default is (0, -9.8), which represents Earth's gravity.

Edge Loops

As it stands now, any nodes added to the scene will just fall off the screen forever. We can add an edge loop around the scene using the init(edgeLoopFrom:) method. Add the following to Example4.swift.
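Inside the scene's didMove(to:), something like this single line does it:

// The scene itself gets a volume-less edge body around its frame.
physicsBody = SKPhysicsBody(edgeLoopFrom: frame)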

Here we have added a physics body to the scene itself. The init(edgeLoopFrom:) takes as a parameter a CGRect that defines its edges. If you test now, you will see that the plane still falls; however, it interacts with this edge loop and no longer falls out of the scene. It also bounces and even turns a little on its side. This is the power of the physics engine—you get all this functionality out of the box for free. Writing something like this on your own would be quite complex.

Bounciness

We have seen that the plane bounces and turns on its side. You can control the bounciness and whether the physics body allows rotation. Enter the following into Example5.swift.
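Inside didMove(to:), the relevant lines might be the following (0.8 is an arbitrary illustrative value):

player.physicsBody?.restitution = 0.8      // quite bouncy (range 0.0 to 1.0)
player.physicsBody?.allowsRotation = false // stop the plane turning on its side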

If you test now, you'll see that the player is very bouncy and takes a few seconds to settle down. You will also notice that it no longer rotates. The restitution property takes a number from 0.0 (less bouncy) to 1.0 (very bouncy), and the allowsRotation property is a simple boolean.

Friction

In the real world, when two objects move against each other, there is a bit of friction between them. You can change the amount of friction a physics body has—this equates to the “roughness” of the body. This property must be between 0.0 and 1.0. The default is 0.2. Add the following to Example6.swift.
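A sketch of the setup inside didMove(to:), with the plane from before dropping onto a tilted rectangle (sizes and angle are illustrative):

let rectangle = SKSpriteNode(color: .red, size: CGSize(width: 250, height: 20))
rectangle.position = CGPoint(x: frame.midX, y: frame.midY)
rectangle.zRotation = CGFloat.pi / 4 // tilt it so the plane can slide down
rectangle.physicsBody = SKPhysicsBody(rectangleOf: rectangle.size)
rectangle.physicsBody?.isDynamic = false // static: unmoved by gravity or the plane
rectangle.physicsBody?.friction = 0.0    // change to 1.0 to slow the plane down
addChild(rectangle)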

Here we create a rectangular Sprite and set the friction property on its physicsBody to 0.0. If you test now, you will see the plane very quickly glides down the rotated rectangle. Now change the friction property to 1.0 and test again. You will see the plane does not glide down the rectangle quite as fast. This is because of the friction. If you wanted it to move even more slowly, you could apply more friction to the player's physicsBody (remember the default is 0.2).

Density and Mass

There are a couple of other properties that you can change on the physics body, such as density and mass. The density and mass properties are interrelated, and when you change one, the other is automatically recalculated. When you first create a physics body, the body's area property is calculated and never changes afterward (it is read only). The density and mass are based on the formula mass = density * area.

When you have more than one node in a scene, the density and mass would affect the simulation of how the nodes bounce off each other and interact. Think of a basketball and a bowling ball—they're roughly the same size, but a bowling ball is much denser. When they collide, the basketball will change direction and velocity much more than the bowling ball.

Force and Impulse

You can apply forces and impulses to move the physics body. An impulse is applied immediately and only one time. A force, on the other hand, is usually applied for a continuous effect. The force is applied from the time you add the force until the next frame of the simulation is processed. To apply a continuous force, you would need to apply it on each frame. Add the following to Example7.swift.
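Inside the scene, the touch handler might apply the impulse like this (the magnitude of 100 is just a guess that gives a visible kick):

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    // A one-time kick straight up, measured in Newton-seconds.
    player.physicsBody?.applyImpulse(CGVector(dx: 0, dy: 100))
}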

Run the project and wait till the player comes to rest on the bottom of the screen, and then tap on the player. You will see the player fly up the screen and eventually come to rest again at the bottom. We apply an impulse using the method applyImpulse(_:), which takes as a parameter a CGVector and is measured in Newton-seconds. 

Why not try the opposite and add a force to the player node? Remember you will need to add the force continuously for it to have the desired effect. One good place to do that is in the scene's update(_:) method. Also, you may want to try increasing the restitution property on the player to see how it affects the simulation.

Collision Detection

The physics engine has a robust collision and contact detection system. By default, any two nodes with physics bodies can collide. You have seen this in previous examples—no special code was required to tell the objects to interact. However, you can change this behaviour by setting a "category" on the physics body. This category can then be used to determine what nodes will collide with each other and also can be used to inform you when certain nodes are making contact.

The difference between a contact and a collision is that a contact is used to tell when two physics bodies are touching each other. A collision, on the other hand, prevents two physics bodies from crossing into each other's space—when the physics engine detects a collision, it will apply opposing impulses to bring the objects apart again. We have seen collisions in action with the player and the edge loop and the player and the rectangle from the previous examples.

Types of physicsBodies

Before we move on to setting up our Categories for the physics bodies, we should talk about the types of physicsBodies. There are three:

  1. A dynamic volume simulates objects with volume and mass. These objects are affected by forces and collisions in the physics world (e.g. the airplane in previous examples).
  2. A static volume is not affected by forces and collisions. However, because it does have volume itself, other bodies can bounce off and interact with it. You set the physics body's isDynamic property to false to create a static volume. These volumes are never moved by the physics engine. We saw this in action earlier with example six, where the airplane interacted with the rectangle, but the rectangle was not affected by the plane or by gravity. To see what I mean, go back to example six and remove the line of code which sets rectangle.physicsBody?.isDynamic = false.
  3. The third type of physics body is an edge, which is a static, volume-less body. We have seen this type of body in action with the edge loop we created around the scene in all the previous examples. Edges interact with other volume-based bodies, but never with another edge.

The categories use a 32-bit integer with 32 individual flags that can be either on or off. This also means you can only have a maximum of 32 categories. This should not present a problem for most games, but it is something to keep in mind.

Creating Categories

Create a new Swift file by going to File > New > File and making sure Swift File is highlighted.

New File

Enter PhysicsCategories as the name and press Create.

Physics Categories

Enter the following into the file you've just created.
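It likely reads something like this sketch, with each category claiming its own bit:

import Foundation

struct PhysicsCategories {
    static let Player: UInt32 = 0x1 << 0   // binary 001
    static let EdgeLoop: UInt32 = 0x1 << 1 // binary 010
    static let RedBall: UInt32 = 0x1 << 2  // binary 100
}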

We use a structure PhysicsCategories to create categories for Player, EdgeLoop, and RedBall. We are using bit shifting to turn the bits on.

Now enter the following in Example8.swift.
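A reconstruction of the setup might look like this (the player and redBall image names are placeholders; the didBegin(_:) contact handler is shown a little later):

import SpriteKit

class Example8: SKScene, SKPhysicsContactDelegate {

    let player = SKSpriteNode(imageNamed: "player")
    let redBall = SKSpriteNode(imageNamed: "redBall")
    var dx: CGFloat = 10
    var dy: CGFloat = 10

    override func didMove(to view: SKView) {
        physicsWorld.contactDelegate = self

        player.position = CGPoint(x: frame.midX, y: frame.midY)
        player.physicsBody = SKPhysicsBody(circleOfRadius: player.size.width / 2)
        player.physicsBody?.affectedByGravity = false
        player.physicsBody?.categoryBitMask = PhysicsCategories.Player
        player.physicsBody?.contactTestBitMask = PhysicsCategories.RedBall
        player.physicsBody?.collisionBitMask = PhysicsCategories.EdgeLoop
        addChild(player)
        player.physicsBody?.applyImpulse(CGVector(dx: dx, dy: dy))

        redBall.position = CGPoint(x: frame.midX, y: frame.maxY - 100)
        redBall.physicsBody = SKPhysicsBody(circleOfRadius: redBall.size.width / 2)
        redBall.physicsBody?.affectedByGravity = false
        redBall.physicsBody?.categoryBitMask = PhysicsCategories.RedBall
        addChild(redBall)

        physicsBody = SKPhysicsBody(edgeLoopFrom: frame)
        physicsBody?.categoryBitMask = PhysicsCategories.EdgeLoop
        physicsBody?.contactTestBitMask = PhysicsCategories.Player
    }
}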

Here we create the player as usual, and create two variables dx and dy, which will be used as the components of a CGVector when we apply an impulse to the player.

Inside didMove(to:), we set up the player and add the categoryBitMask, contactTestBitMask, and collisionBitMask. The categoryBitMask should make sense—this is the player, so we set it to PhysicsCategories.Player. We are interested in when the player makes contact with the redBall, so we set the contactTestBitMask to PhysicsCategories.RedBall. Lastly, we want it to collide with and be affected by physics with the edge loop, so we set its collisionBitMask to PhysicsCategories.EdgeLoop. Finally, we apply an impulse to get it moving.

On the redBall, we just set its categoryBitMask. With the edgeLoop, we set its categoryBitMask, and because we are interested in when the player makes contact with it, we set its contactTestBitMask.

When setting up the contactTestBitMask and collisionBitMask, only one of the bodies needs to reference the other. In other words, you do not need to set up both bodies as contacting or colliding with the other.

For the edgeLoop, we set it to contact with the player. However, we could have instead set up the player to interact with the edgeLoop by using the bitwise OR (|) operator. Using this operator, you can set up multiple contact or collision bit masks. For example:
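// Have the player test for contact with both the red ball and the edge loop.
player.physicsBody?.contactTestBitMask = PhysicsCategories.RedBall | PhysicsCategories.EdgeLoop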

To be able to respond when two bodies make contact, you have to implement the SKPhysicsContactDelegate protocol. You may have noticed this in the example code.

To respond to contact events, you can implement the didBegin(_:) and didEnd(_:) methods. They will be called when the two objects have begun making contact and when they have ended contact respectively. We'll stick with the didBegin(_:) method for this tutorial.

Here is the code for the didBegin(_:) method.
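A sketch, living inside Example8 alongside the setup above:

func didBegin(_ contact: SKPhysicsContact) {
    let firstBody: SKPhysicsBody
    let secondBody: SKPhysicsBody

    // The bodies are not passed in a guaranteed order, so sort by category.
    if contact.bodyA.categoryBitMask < contact.bodyB.categoryBitMask {
        firstBody = contact.bodyA
        secondBody = contact.bodyB
    } else {
        firstBody = contact.bodyB
        secondBody = contact.bodyA
    }

    if (firstBody.categoryBitMask & PhysicsCategories.Player) != 0 &&
       (secondBody.categoryBitMask & PhysicsCategories.RedBall) != 0 {
        print("player and red ball made contact")
    }

    if (firstBody.categoryBitMask & PhysicsCategories.Player) != 0 &&
       (secondBody.categoryBitMask & PhysicsCategories.EdgeLoop) != 0 {
        print("player and edge loop made contact")
        // Reverse direction and give the player another push.
        dx = -dx
        dy = -dy
        player.physicsBody?.applyImpulse(CGVector(dx: dx, dy: dy))
    }
}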

First, we set up two variables, firstBody and secondBody. The two bodies in the contact parameter are not passed in a guaranteed order, so we use an if statement to determine which body has the lower categoryBitMask and assign that one to firstBody.

We can now check which physics bodies are making contact. We do this by ANDing (&) each body's categoryBitMask with the PhysicsCategories we set up previously; if the result is non-zero, we know we have the right body.

Finally, we print which bodies are making contact. If it was the player and edgeLoop, we also invert the dx and dy properties and apply an impulse to the player. This keeps the player constantly moving.

This concludes our study of SpriteKit's physics engine. There is a lot that was not covered, such as SKPhysicsJoint. The physics engine is very robust, and I highly suggest you read through all the various aspects of it, starting with SKPhysicsBody.

Conclusion

In this post we learned about actions and physics—two very important parts of the SpriteKit framework. We looked at a lot of examples, but there is still a lot you can do with actions and physics, and the documentation is a great place to learn. 

In the next and final part of this series, we'll put together everything we have learned by making a simple game. Thanks for reading, and I will see you there!

In the meantime, check out some of our comprehensive courses on Swift and SpriteKit development!


2017-06-13T10:00:00.000Z · James Tyner

Realm Mobile Database for iOS

Final product image
What You'll Be Creating

Introduction

In this tutorial, I'll show you how to use a powerful yet elegant on-device database solution for your iOS apps: Realm Mobile Database. An alternative to Apple Core Data or SQLite with object-relational mapping (ORM), Realm Mobile Database offers developers an easier and more natural way to store and query data.

What Is Realm Mobile Database?

Billed as a true object database, Realm differentiates itself from other similar libraries by treating data objects as live objects—meaning objects are automatically updated. They react responsively to changes and are easy to persist. Even better, you don’t have the steep learning curve that you would with Core Data or SQLite scripting. Instead, you can work in a truly object-oriented way. Realm Mobile Database has also been open-sourced as of 2016, and is available free of charge to developers.

In addition to Realm Mobile Database, the company also offers Realm Mobile Platform, its flagship PaaS, to complement Realm Mobile Database with a server-side solution.

Realm Mobile Platform extends that core with realtime data synchronization and event handling on the server side, all seamlessly connected to the apps. Developers use the platform to build apps with powerful functionality like messaging, collaboration, and offline-first capabilities. The platform is also ideal for mobilizing existing APIs, making it easy to build highly responsive and engaging apps connected to legacy systems and services. (realm.io)

So Realm Mobile Platform works on the server side in the same seamless way as Realm Mobile Database, providing automatic data synchronization and event handling between the client and server, and in the process abstracting away the complexities that arise when dealing with data synchronization. Realm Mobile Platform is beyond the scope of this tutorial, but I'll come back to it in a future post.

Why Realm Mobile Database?

Beyond saving developers the headache and steep learning curve of Core Data, Realm Mobile Database provides distinctive advantages right out of the box.

Performance & Thread-Safety

Performance-wise, Realm Mobile Database has been proven to run queries and sync objects significantly faster than Core Data, and accessing the data concurrently isn’t a problem. That is, multiple sources can access the same object without the need to manage locks or worry about data inconsistencies.

Encryption

Realm Mobile Database provides its own encryption services to protect databases on disk, using AES-256 encryption with SHA-2 verification via a 64-byte encryption key.

This makes it so that all of the data stored on disk is transparently encrypted and decrypted with AES-256 as needed, and verified with a SHA-2 HMAC. The same encryption key must be supplied every time you obtain a Realm instance.

Cross-Platform

Unlike Core Data, Realm Mobile Database is truly cross-platform, supporting iOS, Android, JavaScript web apps, and Xamarin.

Reactive Nature

Because of the way the live objects work, you are able to wire up your UI elements to the data models and your UI will update reactively as the data changes! There is no complicated synchronization code or wiring logic needed, as you would have with Core Data.

When coupled with Realm Mobile Platform and the Realm Object Server, developers will gain the extra benefit of syncing their data to the cloud by simply setting the Realm Object URL.

Even using Realm Mobile Platform, you don’t have to worry about interrupted connections, as Realm has built-in offline capabilities and will queue any data changes to be sent to the server.

Clients

Realm has numerous distinguished clients that have openly adopted Realm Mobile Database, including Netflix and Starbucks.

Alternatives to Realm Mobile Database

Of course, Realm Mobile Database is not the only app storage solution. I already mentioned Apple’s own Core Data, and while it is inherently more complicated to learn, the fact that it belongs to Apple means it will be the de facto database solution for many iOS developers, and will continue to have a large community of developers and support material.

A solution that is somewhat similar to Realm Mobile Database is Google’s Firebase—although this is a combined client-side and server-side solution. Firebase is similarly easy to use and it's free to get started with, but the costs will scale as your usage does. One drawback with Firebase is that you are tightly coupled to their platform, whereas with Realm you are free to use your own back-end—or no back-end at all!

Your First Realm App

Assumed Knowledge

This tutorial assumes you have a working knowledge of Swift, but no Core Data or prior database knowledge is needed. 

As well as Realm, we'll be using the following parts of iOS:

  • UIKit: to demonstrate our data visually
  • CocoaPods: a third-party dependency manager that will be used to install Realm Mobile Database

Objectives of This Tutorial

By the end of this tutorial, you will have developed a simple to-do app written in Swift and making use of Realm Mobile Database to persist data locally. You'll get to create a fully functioning Realm-powered to-do app, and along the way you'll learn the following concepts:

  1. setting up the Realm library on a new project, via CocoaPods
  2. setting up the App Delegate to import the Realm Library
  3. creating the ‘live-object’ model objects
  4. creating the View Controllers and Storyboard in the app UI
  5. connecting the data model to the view controllers and views

You can download the complete source code from the tutorial GitHub repo.

Set Up the Project

Okay, let’s get started creating our Realm app: RealmDo. We are going to create a new Xcode project, so go ahead and create a Master-Detail application.

Create a new project

Next, if you haven’t installed CocoaPods on your machine, you'll need to do so now. So jump into terminal and type the following:

$ sudo gem install cocoapods

You should then get a confirmation that CocoaPods is indeed installed. While you are still in the terminal, navigate to the Xcode project you just created and type the following, to initialize a new Podfile:

$ pod init

You should see a new file named Podfile located in the root directory of your project. This file basically sets out the libraries we want to use in our project. You can refer to the official CocoaPods documentation for more information on how Podfiles work.

Next, we need to add the cocoapod library for Realm, so open up the Podfile in a text editor, and add the following underneath # Pods for RealmDo:
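The pod for Realm's Swift API is named RealmSwift, so the relevant section of the Podfile ends up looking like this:

# Pods for RealmDo
pod 'RealmSwift'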

Save the file, exit, and type:
pod install

After CocoaPods finishes installing the library, it will ask us to close our Xcode project and open up the workspace. Do that, and we are ready to proceed with coding. We will start with the AppDelegate.

Set Up the App Delegate to Import the Realm Library

In our AppDelegate we are going to import the Realm library, so add the following to the AppDelegate.swift file:
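At a minimum, that means adding the import at the top of the file:

import UIKit
import RealmSwift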

Leave the class as is for now, so we can turn our focus to the model object.

Live Object Models

Defining models in Realm is dead simple; you just create a logical model class. In our project, we are going to store reminders, so let's create a new file called Reminder.swift containing a Reminder class, with the following code:
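A model along these lines is all that's needed (the @objc dynamic modifiers let Realm observe the properties):

import RealmSwift

class Reminder: Object {
    @objc dynamic var name = ""    // the reminder's text
    @objc dynamic var done = false // whether it has been completed
}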

For this tutorial, we only need this Reminder model, so we're all done! It’s that simple, and instantiating a model is just as easy, as we will find out later. 

Set Up the View Controllers and Storyboard

Now we focus our attention on the view controllers, but before we go to the MasterViewController.swift class, let's open up Main.storyboard and add a bar button on the top right, called Add, as shown below:

The app main storyboard

The project was initialized by Xcode with the datasource and delegate wired to the view controller, so all we need to do is add the button we just created to the view controller as an IBOutlet. Control-drag from the button to the view controller code in the assistant editor (split view) to generate the link.

Link the button to the view controller

Initializing Realm

Now, moving on to the MasterViewController.swift file, we declare the variables we are going to need, which should look something like the following:
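Reconstructed, the declarations might look like this sketch, with the numbered comments matching the references below:

import UIKit
import RealmSwift

class MasterViewController: UITableViewController {

    var realm: Realm!                              // (1) our handle on the datastore

    lazy var remindersList: Results<Reminder> = {  // (2) all stored Reminder objects
        self.realm.objects(Reminder.self)
    }()

    override func viewDidLoad() {
        super.viewDidLoad()
        realm = try! Realm()                       // (3) instantiate the Realm
    }
}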

First, on line (1), we declare the Realm variable which we will reference to get to our datastore. Then we lazy-load remindersList, asking Realm for a list of all Reminder objects. Finally, we instantiate the Realm variable we declared at the start. Nothing too complicated so far!

Set Up the View Delegate and Datasource

Next, we set up our tableView delegate and datasource methods, as follows:
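They might read like the following sketch (the "Cell" reuse identifier is assumed to come from the storyboard's prototype cell):

override func numberOfSections(in tableView: UITableView) -> Int {
    return 1
}

override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return remindersList.count                     // (4) one row per stored reminder
}

override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
    let item = remindersList[indexPath.row]        // (5) the live object for this row
    cell.textLabel?.text = item.name
    cell.textLabel?.textColor = item.done ? .lightGray : .black
    return cell
}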

On line (4), we get a count of the remindersList list of objects, which will set the count for the number of rows in our one-section tableView.

Then, for each cell, we obtain the Reminder live object’s property to set the label, as well as flagging whether the item is marked as done or not.

Writing Changes to the Database

We want our users to be able to toggle an item as done (and not done), which we indicate by changing the color of the label. We also want to make the table view editable (users will be able to remove cells by swiping from right to left), which we accomplish by adding the following code:
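A sketch of those two table view overrides:

override func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
    let item = remindersList[indexPath.row]
    try! realm.write {                             // (6) mutate inside a write block
        item.done = !item.done
    }
    tableView.reloadRows(at: [indexPath], with: .automatic)
}

override func tableView(_ tableView: UITableView, commit editingStyle: UITableViewCellEditingStyle, forRowAt indexPath: IndexPath) {
    if editingStyle == .delete {
        let item = remindersList[indexPath.row]
        try! realm.write {
            realm.delete(item)                     // (7) deleting the instance removes the record
        }
        tableView.deleteRows(at: [indexPath], with: .fade)
    }
}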

Line (6) is the first time we write back to our database, which you simply do within a self.realm.write block. Note that all you need to do with an instance object is set its value, nothing more. So in this case we toggle the done value by doing item.done = !item.done.

Line (7) is our second example of writing back to our database: we delete an object from the database by simply deleting the instance object.

Adding New Objects

We are making great progress, and in fact we're almost done! We're now able to load, edit, and delete our reminders, but we are missing one important action: adding a new reminder. To implement this, create a new @IBAction method, and wire up your storyboard’s Add toolbar button to the method. 

We're going to build a simple UIAlertController in our example, but as a separate exercise, try to refine the app by upgrading this to a new view controller instead. 

For now, go ahead and add the following code:
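Something along these lines (the method name and alert wording are illustrative):

@IBAction func addReminder(_ sender: Any) {
    let alert = UIAlertController(title: "New Reminder",
                                  message: "What would you like to remember?",
                                  preferredStyle: .alert)
    alert.addTextField(configurationHandler: nil)
    alert.addAction(UIAlertAction(title: "Add", style: .default) { _ in
        guard let name = alert.textFields?.first?.text, !name.isEmpty else { return }
        let item = Reminder()                      // (8) a new live object
        item.name = name
        item.done = false
        try! self.realm.write {
            self.realm.add(item)                   // (9) persist it
        }
        self.tableView.reloadData()
    })
    alert.addAction(UIAlertAction(title: "Cancel", style: .cancel, handler: nil))
    present(alert, animated: true, completion: nil)
}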

On line (8), we create a new reminder instance and set its properties. Then, on line (9) we add the reminder via self.realm.add(item).

Testing the App

So let’s try out the app, by building and running it in Simulator. Go ahead and add two reminders, and set one of them as done by tapping on it. If you exit your app and open it back up again, your items should still be there.

The completed app

Realm Browser

And that’s it! With little to no learning curve, and by bypassing any of the complexities of Core Data, we have a fully baked on-device back end. This is Realm Mobile Database. You can also verify that the data is on the device by downloading Realm Browser, a macOS app that allows us to view, debug and edit Realm data objects. 

Download the app from the Mac App Store and open the Realm database, which is located in your CoreSimulator/Devices/appID/data/… folder. The file you are looking for is default.realm.

Finding the Realm database

Opening it up, you should be able not just to view your data, but also to edit and add new data. Go ahead and try it!

Editing the database with Realm Browser

Conclusion

In this tutorial, you learned about Realm Mobile Database and why it is a powerful tool for the iOS developer. We also briefly touched on its server counterpart, Realm Mobile Platform, which we will cover in a separate tutorial.

We then built a simple reminders app that is powered by Realm Mobile Database. In just a few dozen lines of code, we were able to:

  1. set up a live-object model for the Reminder
  2. wire up our view controller to the data model
  3. declare, instantiate, load, add and delete from the Realm database

Finally, you saw how to use Realm Browser to debug and view your data.

This has been a very basic introduction to Realm Mobile Database, but you can use it as a starting point for embarking on more advanced topics. As next steps, be sure to explore some of the advanced themes in the official documentation, such as working with data relationships, testing Realm objects, threading, and encryption.

And while you're here, be sure to check out some of our other posts on iOS app development!

2017-06-15T19:00:00.000Z · Doron Katz

What's New in Swift 4

Swift 4 has been in the works for the last few months. If you're like me, you might follow Swift Evolution to stay up to date with all the proposals and changes. Even if you do, now is a good time to review all the additions and changes to the language in this new iteration.

A snapshot of Swift 4 was already available a few weeks before Xcode 9 was announced at WWDC 2017. In this post you'll learn all about the new features introduced in Swift 4—from brand new APIs to improvements to the language syntax.

Let's first see how you can get the new compiler installed on your machine.

Xcode Setup

There are two ways to run Swift 4. You can either install the Xcode 9 beta if you have a developer account with access to it or you can set up Xcode 8 to run with a Swift 4 snapshot. In the former case, download the beta from your developer account download page.

If you prefer to use Xcode 8, simply head over to Swift.org to download the latest Swift 4.0 Development snapshot. Once the download finishes, double-click to open the .pkg file, which installs the snapshot. 

Switch to Xcode now and go to Xcode > Toolchains > Manage Toolchains. From there, select the newly installed Swift 4.0 snapshot. Restart Xcode and now Swift 4 will be used when compiling your project or playground. Note that all the code presented in this tutorial is also available in a GitHub repo.

Swift 4.0 snapshot setup in Xcode 8.3

New Features

Let's take a look at the new features added to Swift 4. One caveat: the language is still in beta, and we will most likely see more changes and bug fixes before the official version is released. Moreover, some of the most recently approved proposals may still not be implemented at this time, so keep an eye on future release notes to see what will be implemented and fixed.

Encoding and Decoding

JSON parsing is one of the most discussed topics in the Swift community. It's great to see that someone finally took care of writing proposals SE-0166 and SE-0167 and pushed the idea to refresh the archival and serialization APIs in the Foundation framework. In Swift 4, there is no longer any need to parse or encode your class, struct or enum manually. 

New Encodable and Decodable protocols have been added, and you can make your classes conform to them by simply adding Codable (which is an alias for Decodable & Encodable) to the class's inheritance list. Then you can use the JSONEncoder to encode an instance of the class:
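A sketch using a hypothetical Conference struct, which the later examples reuse:

import Foundation

struct Conference: Codable {
    var name: String
    var city: String
    var date: Date
}

let wwdc = Conference(name: "WWDC", city: "San Jose", date: Date())

let encoder = JSONEncoder()
let data = try! encoder.encode(wwdc)
print(String(data: data, encoding: .utf8)!) // a JSON object with name, city and date keys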

As you can see, you instantiate a JSONEncoder object to convert the struct to a JSON string representation. There are a few settings that you can tweak to get the exact JSON format you need. For example, to set a custom date format, you can specify a dateEncodingStrategy in the following way:
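let formatter = DateFormatter()
formatter.dateFormat = "MM/dd/yyyy" // whatever format your backend expects
encoder.dateEncodingStrategy = .formatted(formatter)
let formattedData = try! encoder.encode(wwdc)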

The reverse process to decode a string works very similarly, thanks to the JSONDecoder class.
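Continuing the sketch from above:

let decoder = JSONDecoder()
decoder.dateDecodingStrategy = .formatted(formatter) // must mirror the encoder
let decoded = try! decoder.decode(Conference.self, from: formattedData)
print(decoded.name) // "WWDC"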

As you can see, by passing the type of the object to the decode method, we let the decoder know what object we expect back from the JSON data. If everything is successful, we'll get an instance of our model object ready to be used.

That's not even the full extent of the new API's power and modularity. Instead of using a JSONEncoder, you can use the new PropertyListEncoder and PropertyListDecoder in case you need to store data in a plist file. You can also create your own custom encoder and decoder. You only need to make your decoder conform to the Decoder protocol and your encoder to the Encoder protocol.

Strings

As part of the String Manifesto, the String type also received quite a big refresh. It now conforms once again (after being removed in Swift 2) to the Collection protocol thanks to proposal SE-0163. So now you can simply enumerate over a string to get all characters.
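For example:

let greeting = "Hello!"
for character in greeting {
    print(character)
}
let characterCount = greeting.count // 6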

Substring is a new type that conforms to the same StringProtocol to which String also conforms. You can create a new Substring by just subscripting a String. The following line creates a Substring by omitting the first and last character.
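let title = "Swift"
let inner = title[title.index(after: title.startIndex)..<title.index(before: title.endIndex)]
print(inner) // "wif", a Substring rather than a String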

A nice addition that should make it easier to work with big pieces of text is multi-line strings. If you have to create a block of text which spans across multiple lines, you previously had to manually insert \n all over the place. This was very inelegant and difficult to manage. A better way now exists to write multi-line strings, as you can see from the following example:
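let address = """
    1 Infinite Loop
    Cupertino, CA 95014
    """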

There are a few rules that go along with this new syntax. Each string begins with a triple quotation mark ("""). Then, if the entire string is indented, the spacing of the closing delimiter decides the spacing to be stripped from each line in the string. For example, if the closing delimiter is indented by 2 tabs, the same amount will be removed from each line. If the string has a line that doesn't have this amount of spacing, the compiler will throw an error.

Key Paths

Key paths were added in Swift 3 to make it easier to reference properties in an object. Instead of referencing an object key with a simple string literal, key paths let us enforce a compile-time check that a type contains the required key—eliminating a common type of runtime error. 

Key paths were a nice addition to Swift 3, but their use was limited to NSObjects and they didn't really play well with structs. These were the main motivations behind proposal SE-0161 to give the API a refresh.

A new syntax was agreed by the community to specify a key path: the path is written starting with a \. It looks like the following:
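let nameKeyPath = \Conference.name // reusing the Conference struct from the Codable example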

The nameKeyPath object describes a reference to the name property. It can then be used as a subscript on that object.
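let conferenceName = wwdc[keyPath: nameKeyPath] // "WWDC"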

If you change the wwdc variable from a let to a var, you can also modify a specific property via the key-path subscript syntax.
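var mutableWWDC = wwdc // a mutable copy, so the key path can write to it
mutableWWDC[keyPath: nameKeyPath] = "WWDC 2017"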

One-Sided Ranges

SE-0172 proposed to add new prefix and postfix operators to avoid unnecessarily repeating a start or end index when it can be inferred. For example, if you wanted to subscript an array from the second index all the way to the last one, you could write it in the following way:
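let conferences = ["WWDC", "GOTO", "droidcon", "dotSwift"] // sample data
let fromSecond = conferences[1..<conferences.endIndex]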

Previously, the endIndex had to be specified. Now, a shorter syntax exists:
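let fromSecondShorter = conferences[1...] // the endIndex is inferred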

Or, if you want to begin with the start index:
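let firstTwo = conferences[...1] // ["WWDC", "GOTO"]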

The same syntax can also be used for pattern matching in a switch statement.
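For instance:

let year = 2017
switch year {
case ..<2000:
    print("before the millennium")
case 2000...:
    print("21st century")
default:
    break
}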

Generic Subscripts

Before Swift 4, subscripts were required to define a specific return value type. SE-0148 proposed the possibility of defining a single generic subscript that would infer the return type based on the defined result value. Aside from the type annotation, it works pretty much the same way as before.
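A sketch with a small JSON wrapper type shows the idea:

struct JSON {
    let storage: [String: Any]

    // The return type T is inferred from the annotation at the call site.
    subscript<T>(key: String) -> T? {
        return storage[key] as? T
    }
}

let json = JSON(storage: ["name": "WWDC", "year": 2017])
let jsonName: String? = json["name"] // inferred as String?
let jsonYear: Int? = json["year"]    // inferred as Int?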

As you can see, this really improves the readability of your objects in the cases where you need to access them via the subscript syntax.

Class and Subtype Existentials

One of the missing features from the Swift type system to date has been the ability to constrain a class to a specific protocol. This has been fixed in Swift 4—you can now specify the type of an object and the protocols to which it has to conform, thanks to SE-0156. You can, for example, write a method that takes a UIView that conforms to the Reloadable protocol with the following syntax:
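import UIKit

protocol Reloadable {
    func reload()
}

func setup(view: UIView & Reloadable) {
    // Accepts only UIView subclasses that also conform to Reloadable.
    view.reload()
}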

Dictionary and Set Improvements

Dictionary and Set also received a nice refresh in Swift 4. They are much more pleasant to use thanks to a few utility methods that have been added.

mapValues

Dictionary now has a mapValues method to change all values, avoiding the use of the generic map method that requires working with key, value tuples.
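For example:

let attendees = ["WWDC": 5000, "GOTO": 1500] // sample data
let doubled = attendees.mapValues { $0 * 2 } // ["WWDC": 10000, "GOTO": 3000]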

filter Return Type

The filter method now returns a collection of the same type as the one you're filtering. Filtering a Dictionary, for example, gives you back a Dictionary rather than an array of key-value tuples.
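Continuing the attendees example:

let bigConferences = attendees.filter { $0.value > 2000 }
// bigConferences is a [String: Int] in Swift 4, not an array of tuples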

Defaults for Dictionary Lookup

When working with dictionaries, you can provide a default value when using the subscript syntax to avoid having to later unwrap an optional.
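For example:

let unknownCount = attendees["UIKonf", default: 0] // 0 rather than nil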

Dictionary Grouping Initializer

Finally, a Dictionary(grouping:) initializer has been introduced to facilitate creating a new dictionary by grouping the elements of an existing collection according to some criteria. 

In the following example, we create a dictionary by grouping together all conferences that have the same starting letter. The dictionary will have a key for each starting letter in the conferences collection, with each value being an array of the conference names that start with that letter.
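let conferenceNames = ["WWDC", "GOTO", "droidcon", "dotSwift"]
let grouped = Dictionary(grouping: conferenceNames) { $0.first! }
// ["W": ["WWDC"], "G": ["GOTO"], "d": ["droidcon", "dotSwift"]]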

Resources

If you are interested in going deeper into the new Swift 4 features, the Swift Evolution proposals linked throughout this post are a good place to start.

Conclusion

Now that you have taken a look at some of the major new features in Swift 4, you're probably champing at the bit to start using them, to help keep your codebase fresh and clean. Start to write your new code to take advantage of the useful new features and think about refactoring some of your previous code to make it simpler and easier to read.

In the meantime, check out some of our other posts on iOS app development!

2017-06-19T13:36:07.000Z · Patrick Balestra
