There's a basic problem in the world of mobile app development.
Developers invest a lot of time and money in creating a great app, and they deserve to be rewarded for that. But consumers are unwilling to pay. As Ryan Chang noted in a tutorial on pricing apps, paid apps account for only a small minority of overall app revenue.
One option, of course, is to offer a free app that includes in-app purchases. But many people hate those too, especially when the demands for money are too intrusive. So what's an honest app developer to do?
Amazon's newly-launched Amazon Underground offers an intriguing solution. Customers get to download and use your Android app completely free, and Amazon pays you for every minute that they spend using it.
Customers do get shown some interstitial advertisements, so in a sense this is similar to other mobile advertising models. But what's different about Amazon Underground's model is that developers don't have to worry about setting up ads, or about whether users will click on them or not.
All that matters is how many people use your app and how long they use it for, so you can focus purely on creating a powerful user experience that keeps people coming back for more.
You are then paid a flat rate per user per minute: in the US, that's $0.0020, and the rates are similar in other countries, allowing for currency conversion.
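To put that rate in perspective, here is a hypothetical back-of-envelope calculation. The user and engagement figures are invented purely for illustration; only the US rate comes from the text above.

```javascript
// Estimate daily Amazon Underground revenue at the US rate of $0.0020/minute.
// The user and engagement numbers below are hypothetical.
const ratePerMinute = 0.0020;     // USD, US rate
const dailyActiveUsers = 5000;    // hypothetical
const minutesPerUserPerDay = 12;  // hypothetical

const dailyRevenue = ratePerMinute * dailyActiveUsers * minutesPerUserPerDay;
console.log('$' + dailyRevenue.toFixed(2) + ' per day'); // $120.00 per day
```

As the sketch suggests, revenue scales directly with engagement, which is exactly why the model rewards apps that keep users coming back.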
Are You Eligible?
Not every app is suitable for Amazon Underground. Because payment depends on active user engagement, a utility app that runs in the background or only gets used occasionally would not be suitable. Also, there are some eligibility requirements:
Your mobile app must be available for download from at least one other app store, such as Google Play, and be monetized in at least one of the following ways:
The app is available for purchase for a fee in all other app stores where it is sold.
The app contains in-app items that are available for purchase for a fee.
Your mobile app must not contain any subscription in-app items.
The features and gameplay of the Amazon Underground version of your app must be substantially similar to or better than the non-Underground version.
When you submit your app to the Amazon Appstore, you must make your app available for distribution on at least one non-Amazon mobile device.
In this tutorial, I will show you, step by step, how to create a modern hybrid mobile application (iOS and Android) from your WordPress website using the latest technologies. We'll be using Ionic Framework, ECMAScript 6, npm, webpack, and Apache Cordova.
By the end of this tutorial, you will have built the following application. It has three modules: a Home module that displays your latest posts, a Post module that displays a specific post, and a Menu module that displays the menu.
1. Tools
Ionic Framework
The beautiful, open source front-end SDK for developing amazing mobile apps with web technologies.
The Ionic Framework ecosystem is large, including the Ionic CLI (command line tool), Ionic Push (easy push notifications), and the Ionic Platform (backend services). It is currently one of the top open-source projects on GitHub, with more than 19,000 stars and over 600,000 apps created.
Ionic covers all your application's needs. However, for this tutorial I will only focus on Ionic Framework (or Ionic SDK), which is a set of AngularJS directives (Web Components) and services.
ECMAScript 6 (ES6)
ECMAScript 2015 (6th Edition) is the current version of the ECMAScript Language Specification standard. It was officially approved and published as a standard by the ECMA General Assembly on June 17, 2015.
ECMAScript 6 gives you access to a lot of new features, many of which are inspired by CoffeeScript, such as arrow functions, generators, classes, and let scoping. Even though ES6 was only approved recently, you can use it right now with a JavaScript compiler such as Babel.
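Here is a quick illustration of a few of those features in one snippet (compile with Babel to run it in older environments):

```javascript
'use strict';

// Classes: syntactic sugar over prototype-based inheritance.
class Post {
    constructor(title) {
        this.title = title;
    }
}

// Arrow functions: terser, and they keep the surrounding `this`.
const titles = [new Post('Intro'), new Post('Routing')].map((p) => p.title);

// Generators: produce values lazily with yield.
function* ids() {
    let id = 0; // let is block-scoped, unlike var
    while (true) {
        yield id++;
    }
}

const gen = ids();
const first = gen.next().value;
const second = gen.next().value;

console.log(titles, first, second);
```

Running this logs the two titles followed by the first two generated ids, 0 and 1.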
Node Package Manager (npm)
Node Package Manager is the most popular package manager in the world. Its number of packages is growing faster than those of Ruby, Python, and Java combined. npm runs on Node.js.
Why Not Bower?
We opt for npm because using both Bower and npm in the same project is painful, and CommonJS support in Bower isn't straightforward. CommonJS defines a module format to solve JavaScript scoping outside the browser, and npm supports it. CommonJS modules can be required using ES5 or ES6 syntax.
// ES5
var angular = require('angular');
// ES6
import angular from "angular";
webpack
In my opinion, webpack has been a game changer in the industry: gone are the complicated Grunt or Gulp scripts that you need to maintain. webpack allows you to require any type of file (.js, .coffee, .css, .scss, .png, .jpg, .svg, etc.) and pipe them through loaders to generate static assets that are available to your application.
The difference from Grunt and Gulp is that the majority of your needs (minification and compilation) can be covered by just adding some configuration; there's no need to write scripts. For instance, requiring a Sass file, compiling it, autoprefixing it, and injecting the resulting minified CSS into your application is as simple as this:
I don't think I need to show you the equivalent using Gulp or Grunt. I think you get my point.
2. Prerequisites
This tutorial assumes that you have:
a basic knowledge of AngularJS and Ionic
a WordPress website ready to be queried (a local installation is fine)
a machine with Node.js, npm, and Bower installed (we'll need Bower for some dependencies)
Git installed, with write access to the project folder without sudo
3. Installation
Before we get started, you will need to install two things:
a WordPress plugin that turns your blog into a RESTful API
the application itself
RESTful API
To fetch the posts from your WordPress installation, you will need to install the WP REST API plugin. Make sure that you install version 1.2.x, as version 2.x is on its way.
In WordPress, go to Plugins > Add New.
Search for WP REST API (WP API).
Click Install Now to install the plugin.
If the installation is successful, click Activate Plugin to activate it.
Once the plugin is activated, open a browser and enter http://example.com/wp-json (replacing the domain with your own). This should give you a response similar to the one below.
{
    "name": "Lorem Ipsum blog",
    "description": "Just another WordPress site",
    "URL": "http://yourDomainName.com/wp-json",
    "routes": {},
    "authentication": {},
    "meta": {}
}
Application
To install the application, clone the repository using the following commands.
# Clone the repository and give it a name (here myTutorial)
$ git clone https://github.com/tutsplus/Hybrid-WordPressIonicAngularJS.git myTutorial
# Open the project
$ cd myTutorial
Next, create a configuration file and install the dependencies.
# Copy the default config to your personal config
$ cp config/default.config.json config/config.json
# Install dependencies
$ npm install
To make sure both the application and the REST API work together, open config/config.json. This is your personal configuration file, which is ignored by Git. Change the base URL of the API to the one for your WordPress installation.
Run npm run devserver and open http://localhost:8080/webpack-dev-server/ in a browser. If everything works as expected, you should be in front of a running application that displays your WordPress posts. I have created a demo application to give you an idea of what to expect.
Now that you can see the result of what we are after, let me go through the details. Note that the following code samples are simplified. You can find the source code on GitHub.
4. Dependencies
The npm install command installed several libraries. Some of them are direct dependencies, while the rest are development dependencies.
Direct Dependencies
Direct dependencies are the libraries your application needs in order to run properly once built.
Notice that the application doesn't directly depend on AngularJS, because ionic-sdk already includes angular.js, angular-animate.js, angular-sanitize.js, and angular-ui-router.js.
wp-api-angularjs (WordPress WP API client for AngularJS) is a set of AngularJS services that allow communication with the REST API plugin that you installed earlier. You can see the complete list of dependencies on GitHub.
Development Dependencies
Development dependencies are mostly webpack loaders. Loaders are functions that take the source of a resource file, apply some changes, and return the new source. We need loaders that handle .scss, .js (ES6), .html, and .json. You can see a complete list of development dependencies on GitHub.
5. Application Architecture
I have been developing AngularJS applications for a long time, and after a lot of experimenting I have settled on the following architecture:
every file that can be edited lives under the src/ (in this project, lib/) folder
every AngularJS module gets its own folder
every module file *.module.js must define a unique namespace (and be the only place where this namespace appears)
every module file *.module.js must declare all its dependencies (even if dependencies are already injected in the app)
every module file *.module.js must declare all its configs, controllers, services, filters, etc.
every config, controller, service, filter, etc. must export a function (CommonJS)
if a module needs a specific style, the .scss file must live within the module
These recommendations are powerful, as they ensure that you have loosely coupled modules that can be shared by several applications without running into problems.
This is what the application folder structure looks like:
When using webpack, an entry point is necessary. Our entry point is lib/index.js. It contains our application's basic dependencies (such as ionic.bundle that contains AngularJS), our home-made modules, and adds the Sass entry point.
// Ionic, Angular & WP-API client
import 'ionic-sdk/release/js/ionic.bundle';
import 'wp-api-angularjs/dist/wp-api-angularjs.bundle';
// Our modules
import modHome from './home/home.module.js';
import modPost from './post/post.module.js';
import modMenu from './menu/menu.module.js';
// Style entry point
import './scss/bootstrap';
Now that we have imported our dependencies we can create our application module. Let's call our app prototype. It has ionic, wp-api-angularjs, and our home-made modules as dependencies.
// Create our prototype module
let mod = angular.module('prototype', [
    'ionic',
    'wp-api-angularjs',
    modHome,
    modMenu,
    modPost
]);
Once the module is created, we can export it as a standard CommonJS module.
export default mod.name;
This is a great example of what an AngularJS module should look like.
Routing
Our application has a side menu <ion-side-menu ui-view="menu"> in which the Menu module will be rendered. It also has a content section <ion-nav-view name="content"> in which the Home and Post modules will appear.
The ui-view directive is part of UI-router, which Ionic uses. It tells $state (the UI-router service) where to place your templates. Similarly, the name attribute attached to <ion-nav-view> belongs to a custom Ionic directive that uses ui-view underneath. You can consider the two identical.
Here is a simplified version of the root state, the state that all modules share:
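The snippet below is a reconstruction of what that root state might look like; the template path and markup are assumptions, so check the repository for the real file.

```javascript
// A hypothetical root config -- an abstract state hosting the side menu
// and the content area that all modules share.
export default function ($stateProvider, $urlRouterProvider) {
    'ngInject';

    $stateProvider.state('root', {
        abstract: true, // never activated directly, only via its children
        views: {
            // The layout template declares ui-view="menu" and
            // <ion-nav-view name="content"> for other states to fill.
            '@': { templateUrl: 'layout/layout.html' }
        }
    });

    // Fall back to the home screen for unknown URLs.
    $urlRouterProvider.otherwise('/home');
}
```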
The Menu module is very simple. Its purpose is to add a menu inside <ion-side-menu>; without this module, the side menu would be blank. The Menu module declares only a config file, and it has ionic and ui.router as dependencies.
import modConfig from './menu.config';

let mod = angular.module('prototype.menu', [
    'ionic',
    'ui.router'
]);

mod.config(modConfig);

export default mod.name;
The most interesting part is the configuration. We do not want to create a state for the Menu module, as it is available everywhere. Instead, we decorate the root state with the menu content. Because ui-view="menu" is defined in the root state, we use menu@root to refer to it.
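Conceptually, the view mapping inside the state definition looks like the fragment below. The template path is an assumption; the key point is the menu@root addressing.

```javascript
// Fragment of a state definition (not a complete file):
// 'menu@root' targets the ui-view="menu" declared in the root state's
// own template, so the menu renders inside the side menu everywhere.
views: {
    'menu@root': {
        templateUrl: 'menu/menu.html'
    }
}
```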
The Home module displays the latest posts of your WordPress website. It has a config file and a controller, and it depends on the following libraries:
ionic
ui.router
wp-api-angularjs
import modConfig from './home.config';
import modController from './home.controller';

let mod = angular.module('prototype.home', [
    'ionic',
    'ui.router',
    'wp-api-angularjs'
]);

mod.config(modConfig);
mod.controller('HomeController', modController);

export default mod.name;
home.config.js
The config adds a new state, root.home, with the /home URL that has a template and a controller (both living within the module).
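As a sketch (the template path and controller alias are assumptions; the real file lives in the repository), home.config.js might look like this:

```javascript
// home.config.js -- a sketch of registering the root.home state.
export default function ($stateProvider) {
    'ngInject';

    $stateProvider.state('root.home', {
        url: '/home',
        views: {
            // Render into the <ion-nav-view name="content"> of the root state.
            'content@root': {
                templateUrl: 'home/home.html',
                controller: 'HomeController as homeCtrl'
            }
        }
    });
}
```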
The template has an ion-refresher directive that allows users to reload the page by pulling it down. It also has an ion-infinite-scroll directive that calls the loadMore function when it comes into view. Posts are displayed using the ng-repeat directive.
Tip: Use the track by expression for better performance. It minimizes DOM manipulation when a post is updated.
<ion-view>
  <ion-nav-title>Home</ion-nav-title>
  <ion-content>
    <ion-refresher pulling-text="Pull to refresh" on-refresh="homeCtrl.refresh()"></ion-refresher>
    <div class="list card" ng-repeat="post in homeCtrl.posts track by post.ID">
      <!-- THE POST DETAILS -->
    </div>
    <ion-infinite-scroll immediate-check="true" on-infinite="homeCtrl.loadMore()"></ion-infinite-scroll>
  </ion-content>
</ion-view>
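The controller driving this template might look roughly like the sketch below. The $wpApiPosts pagination call is an assumption (only $get is shown elsewhere in this tutorial); the scroll events are standard Ionic ones. Check the repository for the real implementation.

```javascript
// home.controller.js -- a simplified sketch, not the repository's exact code.
export default function ($scope, $wpApiPosts) {
    'ngInject';

    var vm = this;
    vm.posts = [];
    vm.page = 1;

    vm.loadMore = function () {
        // Hypothetical pagination call; see wp-api-angularjs for the real API.
        $wpApiPosts.$getList({ page: vm.page }).then(function (response) {
            vm.posts = vm.posts.concat(response.data);
            vm.page++;
            // Tell ion-infinite-scroll the batch has finished loading.
            $scope.$broadcast('scroll.infiniteScrollComplete');
        });
    };

    vm.refresh = function () {
        vm.posts = [];
        vm.page = 1;
        vm.loadMore();
        // Tell ion-refresher the pull-to-refresh cycle is done.
        $scope.$broadcast('scroll.refreshComplete');
    };
}
```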
The Post module displays only one post. It has a config file and a controller, and it depends on the same libraries as the Home module.
post.module.js
import modConfig from './post.config';
import modController from './post.controller';

let mod = angular.module('prototype.post', [
    'ionic',
    'ui.router',
    'wp-api-angularjs'
]);

mod.config(modConfig);
mod.controller('PostController', modController);

export default mod.name;
Similar to the Home module, the config adds a new state, root.post, with the /post/:id URL. It also registers a view and a controller.
The controller retrieves the post specified in the URL (/post/:id) via the $stateParams service (a UI-router service).
export default function ($scope, $log, $wpApiPosts, $stateParams) {
    'ngInject';

    var vm = this;
    vm.post = null;

    $scope.$on('$ionicView.loaded', init);

    function init() {
        $wpApiPosts.$get($stateParams.id).then((response) => {
            vm.post = response.data;
        });
    }
}
post.html
The template has an ion-spinner directive that displays a loader while the data is being fetched from the WordPress REST API. When the post is loaded, we use an Ionic card to render the author avatar, the post title, and the post content.
Tip: Use the bindOnce expression, ::, (introduced in Angular 1.3) to avoid watching data that will not change over time.
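As an illustration, post.html could be structured as below. The property names, classes, and the postCtrl alias are assumptions; the real template is in the repository.

```html
<!-- post.html -- a sketch; bindings and classes are assumptions. -->
<ion-view>
  <ion-nav-title>Post</ion-nav-title>
  <ion-content>
    <!-- Spinner while the post is being fetched. -->
    <ion-spinner ng-if="!postCtrl.post"></ion-spinner>
    <!-- bindOnce (::) because the loaded post won't change. -->
    <div class="list card" ng-if="postCtrl.post">
      <div class="item item-avatar">
        <img ng-src="{{::postCtrl.post.author.avatar}}">
        <h2 ng-bind="::postCtrl.post.title"></h2>
      </div>
      <div class="item item-body" ng-bind-html="::postCtrl.post.content"></div>
    </div>
  </ion-content>
</ion-view>
```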
Style
First, we import our Sass variables, and then the Ionic styles. Importing our variables before Ionic allows us to overwrite whatever Sass variables Ionic has declared.
For example, if you want the positive color to be red instead of blue, you can overwrite it like this:
$positive: red !default;
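Put together, the Sass entry point imported earlier (./scss/bootstrap) might look like the sketch below; the exact paths are assumptions.

```scss
// scss/bootstrap.scss -- a sketch; exact import paths are assumptions.
// 1. Our overrides come first...
@import "variables";
// 2. ...then Ionic, whose variables are declared with !default and therefore
//    only take effect when we haven't already set them.
@import "ionic-sdk/release/scss/ionic";
```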
6. Android and iOS
Installation
Run the following commands inside the project folder and choose the platform you want to build for.
$ cp config.dist.xml config.xml
$ npm run installCordova
Which platforms do you want to build? (android ios):
In addition to installing platforms within the /platforms folder, the script will install one plugin. For the demo, we need the cordova-plugin-whitelist plugin. It is necessary to allow the application to query the WordPress REST API we set up earlier.
If you open config.xml, you will see that we allow access to any kind of origin (<access origin="*" />). Of course, this is only for demo purposes. If you deploy your application to production, then make sure you restrict access like this:
<access origin="http://example.com" />
Android
Prerequisites
Android SDK
Ant
Running the npm run runAndroid command is a shortcut for rm -rf www/* && webpack && cordova run android. This removes everything within the www folder, dumps a non-minified version of the app into it, and runs the app on Android. If an Android device is connected (run adb devices to make sure), the command will load the app on the device; otherwise it will use the Android emulator.
# Run Android
$ npm run runAndroid
iOS
Prerequisites
OS X
Xcode
If you do not have an Apple device, you should install the iOS Simulator. It's excellent, and much better than the Android emulator.
$ sudo npm install -g ios-sim
Running npm run runIosEmulator is a shortcut for rm -rf www/* && webpack && cordova run ios. The npm run runIosDevice command is a shortcut for rm -rf www/* && webpack && cordova run ios --device.
# Run iOS
$ npm run runIosEmulator
$ npm run runIosDevice
Conclusion
With this tutorial, I've tried to show you how easy it is to create a hybrid, mobile application for your WordPress website. You should now be able to:
create loosely coupled modules that respect CommonJS specs
import CommonJS modules with ECMAScript 6
use the WordPress REST API client side (with wp-api-angularjs)
leverage Ionic Framework to create a great user interface
use webpack to bundle your application
use Cordova to run the application on iOS and Android
If you want to go further, then take a look at a project I created a few months ago, WordPress Hybrid Client.
WordPress Hybrid Client
WordPress Hybrid Client (WPHC) is an open-source project available on GitHub that helps you to create iOS and Android versions of your WordPress website for free. WPHC is based on the same technology stack that we used in this tutorial.
tag:code.tutsplus.com,2005:PostPresenter/cms-24170What You'll Be Creating
Introduction
In this tutorial, I will explain you step by step how to create a modern, hybrid, mobile application (iOS and Android) of your WordPress website using the latest technologies. We'll be using Ionic Framework, ECMAScript 6, npm, webpack, and Apache Cordova.
At the end of this tutorial you will get the following application. It has only three modules, a Home module that displays your latest posts, a Post module that displays a specific post, and a Menu module that displays the menu.
1. Tools
Ionic Framework
The beautiful, open source front-end SDK for developing amazing mobile apps with web technologies.
Ionic Framework ecosystem is large, including Ionic CLI (command line tool), Ionic Push (easy push notifications), and Ionic Platform (backend services). It is currently one of the top open-source projects on GitHub with more than 19,000 stars and over 600,000 apps created.
Ionic covers all your application's needs. However, for this tutorial I will only focus on Ionic Framework (or Ionic SDK), which is a set of AngularJS directives (Web Components) and services.
ECMAScript 6 (ES6)
ECMAScript 2015 (6th Edition) is the current version of the ECMAScript Language Specification standard. ES6 got officially approved and published as a standard on June 17, 2015 by the ECMA General Assembly.
ECMAScript 6 gives you access to a lot of new features, many of which are inspired by CoffeeScript, including as arrow functions, generators, classes, and let scoping. Even though ES6 got approved recently, you can use it right now using a JavaScript compiler, such as Babel.
Node Package Manager (npm)
Node Package Manager is the most popular package manager in the world. The number of packages is growing faster than Ruby, Python, and Java combined. npm runs on Node.js.
Why Not Bower?
We opt for npm, because using both Bower and npm in the same project is painful and CommonJS support with Bower isn't straightforward. CommonJS defines a module format to solve JavaScript scope outside the browser and npm supports this. CommonJS modules can be required using ES5 or ES6.
// ES5
var angular = require('angular');
// ES6
import angular from "angular";
webpack
In my opinion, webpack has been a game changer in the industry, exit complicated Grunt or Gulp scripts that you need to maintain. webpack allows you to require any type of file (.js, .coffee, .css, .scss, .png, .jpg, .svg, etc.) and pipe them through loaders to generate static assets that are available to your application.
The difference with Grunt and Gulp is that the majority of your needs (minification and compilation) can be covered by just adding some configuration, there's no need to create scripts. For instance, requiring a Sass file, compiling it, autoprefixing it, and injecting the resulting minified CSS into your application will be as simple as this:
I don't think I need to show you the equivalent using Gulp or Grunt. I think you get my point.
2. Prerequisites
This tutorial assumes that you have:
a basic knowledge of AngularJS and Ionic
a WordPress website ready to be queried (a local installation is fine)
a machine with Node.js, npm, Bower (we'll need it for some dependencies)
Git installed with write access without sudo on the project folder
3. Installation
Before we get started, you will need to install two things:
a WordPress plugin that turns your blog into a RESTFUL API
the application itself
RESTFUL API
To fetch the posts for your WordPress installation, you will need to install WP REST API plugin. Make sure that you install version 1.2.x as version 2.x is on its way.
In WordPress, go to Plugins > Add New.
Search for WP REST API (WP API).
Click Install Now to install the plugin.
If the installation is successful, click Activate Plugin to activate it.
If the installation was successful, open a browser and enter http://example.com/wp-json. This should give you a response similar to the one below.
{
"name": "Lorem Ipsum blog",
"description": "Just another WordPress site",
"URL": "http://yourDomainName.com/wp-json",
"routes": {},
"authentication": {},
"meta": {}
}
Application
To install the application, clone the repository, using the following commands.
# Clone the repository and give it a name (here myTutorial)
$ git clone https://github.com/tutsplus/Hybrid-WordPressIonicAngularJS.git myTutorial
# Open the project
$ cd myTutorial
Next, create a configuration file and install the dependencies.
# Copy the default config to your personal config
$ cp config/default.config.json config/config.json
# Install dependencies
$ npm install
To make sure both the application and the REST API work together, open config/config.json. This is your personal configuration file, which is ignored by Git. Change the base URL of the API to the one for your WordPress installation.
Run npm run devserver and open http://localhost:8080/webpack-dev-server/ in a browser. If everything works as expected, you should be in front of a running application that displays your WordPress posts. I have created a demo application to give you an idea of what to expect.
Now that you can see the result of what we are after, let me go through the details. Note that the following code samples are simplified. You can find the source code on GitHub.
4. Dependencies
The npm install command installed several libraries. Some of them are direct dependencies while the rest are development dependencies.
Direct Dependencies
The direct dependencies are dependencies that your application needs in order to run properly when built.
Notice that the application doesn't directly depend on AngularJS, because ionic-sdk already includes angular.js, angular-animate.js, angular-sanitize.js, and angular-ui-router.js.
wp-api-angularjs (WordPress WP API client for AngularJS) is a set of AngularJS services that allow communication with the REST API plugin that you installed earlier. You can see the complete list of dependencies on GitHub.
Development Dependencies
Development dependencies are mostly webpack loaders. Loaders are functions that take the source of a resource file, apply some changes, and return the new source. We need loaders that handle .scss, .js (ES6), .html, and .json. You can see a complete list of development dependencies on GitHub.
5. Application Architecture
I have been developing AngularJS applications for a long time and after a lot of experimenting I have committed to the following architecture:
a file that can be edited live under the src/ or /lib folder
every AngularJS module needs a proper folder
every module file *.module.js must define a unique namespace (and be the only place where this namespace appears)
every module file *.module.js must declare all its dependencies (even if dependencies are already injected in the app)
every module file *.module.js must declare all its configs, controllers, services, filters, etc.
every config, controller, service, filter, etc. must export a function (CommonJS)
if a module needs a specific style, the .scss file must live within the module
These recommendations are powerful as they assure that you to have loosely coupled modules that can be shared by several applications without running into problems.
This is what the application folder structure looks like:
When using webpack, an entry point is necessary. Our entry point is lib/index.js. It contains our application's basic dependencies (such as ionic.bundle that contains AngularJS), our home-made modules, and adds the Sass entry point.
// Ionic, Angular & WP-API client
import 'ionic-sdk/release/js/ionic.bundle';
import 'wp-api-angularjs/dist/wp-api-angularjs.bundle';
// Our modules
import modHome from './home/home.module.js';
import modPost from './post/post.module.js';
import modMenu from './menu/menu.module.js';
// Style entry point
import './scss/bootstrap';
Now that we have imported our dependencies we can create our application module. Let's call our app prototype. It has ionic, wp-api-angularjs, and our home-made modules as dependencies.
// Create our prototype module
let mod = angular.module('prototype', [
'ionic',
'wp-api-angularjs',
modHome,
modMenu,
modPost
]);
Once the module is created, we can export it as a standard CommonJS module.
export default mod = mod.name;
This is a great example of what an AngularJS module should look like.
Routing
Our application has a side menu <ion-side-menu ui-view="menu"> in which the Menu module will be rendered. It also has a content section <ion-nav-view name="content"> in which the Home and Post modules will appear.
The ui-view directive is part of the UI-router that Ionic uses. It tells $state (UI-router service) where to place your templates. Similarly, the name directive attached to <ion-nav-view> is a custom Ionic directive that is using ui-view underneath. You can consider both directives identical.
Here is a simplified version of the root state, the state that all modules share:
The Menu module is very simple. Its purpose is to add a menu inside <ion-side-menu>. Without this module, the side menu would be blank. The menu module declares only a config file, it has ionic and ui.router as dependencies.
import modConfig from './menu.config';
let mod = angular.module('prototype.menu', [
'ionic',
'ui.router'
]);
mod.config(modConfig);
export default mod = mod.name;
The most interesting part is the configuration. We do not want to create a state for the Menu module as it is available everywhere. Instead, we decorate the root state with the menu content. With the ui-view="menu" being defined in the root state, we need to use menu@root to refer to it.
The Home module displays the latests posts of your WordPress website. It has a config file, a controller, and it depends on the following libraries:
ionic
ui.router
wp-api-angularjs
import modConfig from './home.config';
import modController from './home.controller';
let mod = angular.module('prototype.home', [
'ionic',
'ui.router',
'wp-api-angularjs'
]);
mod.config(modConfig);
mod.controller('HomeController', modController);
export default mod = mod.name
home.config.js
The config adds a new state, root.home, with the /home URL that has a template and a controller (both living within the module).
The template has a ion-refresher directive that allows users to reload the page by pulling the page down. It also has a ion-infinite-scroll directive that calls the loadMore function when reached. Posts are displayed using the ng-repeat directive.
Tip: Use the track by expression for better performance. It minimizes DOM manipulation when a post is updated.
<ion-view><ion-nav-title>Home</ion-nav-title><ion-content><ion-refresher pulling-text="Pull to refresh" on-refresh="homeCtrl.refresh()"></ion-refresher><div class="list card" ng-repeat="post in homeCtrl.posts track by post.ID"><!-- THE POST DETAILS --></div><ion-infinite-scroll immediate-check="true" on-infinite="homeCtrl.loadMore()"></ion-infinite-scroll></ion-content></ion-view>
The Post module displays only one post. It has a config file, a controller, and it depends on the same librairies as the Home module.
post.module.js
import modConfig from './post.config';
import modController from './post.controller';
let mod = angular.module('prototype.post', [
'ionic',
'ui.router',
'wp-api-angularjs'
]);
mod.config(modConfig);
mod.controller('PostController', modController);
export default mod = mod.name
Similar to the Home module, the config adds a new state, root.post, with the /post/:id URL. It also registers a view and a controller.
The controller retrieves the post specified in the url /post/:id via the $stateParams service (UI router service).
export default function ($scope, $log, $wpApiPosts, $stateParams) {
'ngInject';
var vm = this;
vm.post = null;
$scope.$on('$ionicView.loaded', init);
function init() {
$wpApiPosts.$get($stateParams.id).then((response) => {
vm.post = response.data;
});
}
}
post.html
The template has a ion-spinner directive that displays a loader while the data is being fetched from the WordPress REST API. When the post is loaded, we use an Ionic card to render the author avatar, the post title, and the post content.
Tip: Use the bindOnce expression, ::, (introduced in Angular 1.3) to avoid watching data that will not change over time.
First, we import our variables. We then import the Ionic styles. Importing our variables before Ionic allows us to overwrite whatever Sass variables Ionic has declared.
For example, if you want the positive color to be red instead of blue, you can overwrite it like this:
$positive: red !default;
6. Android and iOS
Installation
Run the following commands inside the project folder and chose the platform you want to build for.
$ cp config.dist.xml config.xml
$ npm run installCordova
Which platforms do you want to build? (android ios):
In addition to installing platforms within the /platforms folder, the script will install one plugin. For the demo, we need the cordova-plugin-whitelist plugin. It is necessary to allow the application to query the WordPress REST API we created earlier.
If you open config.xml, you will see that we allow access to any kind of origin (<access origin="*" />). Of course, this is only for demo purposes. If you deploy your application to production, then make sure you restrict access like this:
<access origin="http://example.com" />
Android
Prerequisites
Android SDK
Ant
Running the npm run runAndroid command is a shortcut for rm -rf www/* && webpack && cordova run android. This removes everything within the www folder, dumps a non-minified version of the app in it, and runs the android command. If an Android device is connected (run adb devices to make sure), the command will load the app on the device, otherwise it will use the Android emulator.
# Run Android
$ npm run runAndroid
iOS
Prerequisites
OS X
Xcode
If you do not have an Apple device, you should install the iOS Simulator. It's really good and better than the Android emulator.
$ sudo npm install -g ios-sim
Running npm run runIosEmulator is a shortcut for rm -rf www/* && webpack && cordova run ios. The npm run runIosDevice command is a shortcut for rm -rf www/* && webpack && cordova run ios --device.
# Run iOS
$ npm run runIosEmulator
$ npm run runIosDevice
Conclusion
With this tutorial, I've tried to show you how easy it is to create a hybrid, mobile application for your WordPress website. You should now be able to:
create loosely coupled modules that respect CommonJS specs
import CommonJS modules with ECMAScript 6
use the WordPress REST API client side (with wp-api-angularjs)
leverage Ionic Framework to create a great user interface
use webpack to bundle your application
use Cordova to run the application on iOS and Android
If you want to go further, then take a look at a project I created a few months ago, WordPress Hybrid Client.
WordPress Hybrid Client
WordPress Hybrid Client (WPHC) is an open-source project available on GitHub that helps you to create iOS and Android versions of your WordPress website for free. WPHC is based on the same technology stack that we used in this tutorial.
Raygun is a powerful new crash reporting service for your applications. Raygun automatically detects, discovers and diagnoses errors and crashes that are happening in your software applications, notifying you of issues that are affecting your end users.
If you've ever clicked "Don't Send" on an operating system crash reporting dialog then you know that few users actively report bugs—most simply walk away in frustration. In fact, a survey by Compuware reported that only 16% of users try a crashing app more than twice. It's vital to know if your software is crashing for your users. Raygun makes this easy.
With just a few short lines of code, you can integrate Raygun into your development environment in minutes. Raygun supports all major programming languages and platforms, so simply select the language you want to get started with. You'll instantly begin receiving reports of errors and crashes and will be able to study diagnostic information and stack traces on the Raygun dashboard. For this tutorial, I'll show you examples of tracking JavaScript apps such as Ghost and PHP-based WordPress.
By pinpointing problems for you and telling you exactly where to look, Raygun helps you build healthier, more reliable software to delight your users and keep them coming back.
More importantly, Raygun is built for teams and supports integrations for workplace software such as team chat, e.g. Slack and Hipchat, project management tools, e.g. JIRA and Sprintly, and issue trackers, e.g. GitHub and Bitbucket. Raygun gives your team peace of mind that your software is performing as you want it to—flawlessly.
In this tutorial, I'll walk you through setting up your application with Raygun step by step so you can begin using their 30-day free trial.
If you have any requests for future tutorials or questions and comments on today's, please post them below. You can also reach me on Twitter @reifman or email me directly.
Getting Started
Signing Up for the Raygun 30-Day Free Trial
Trying out Raygun is easy (and free). When you visit the home page (shown above), just click the green Free Trial button, which will take you to the signup form:
As soon as you sign up, you'll begin receiving helpful daily guide emails as you learn to use the product:
One of the most powerful features of Raygun is that it works with all the major programming languages and platforms. And it's amazingly easy to integrate. Just copy and paste the code into your application and Raygun will start monitoring for errors. In the case of WordPress, they provide a pre-built plugin.
How Much Does It Cost?
Pricing plans for Raygun start at $49 monthly but can be discounted nearly 10% when paid annually.
You might also be interested in Raygun Enterprise which includes either massive cloud support or the ability to securely self-host a version of the service.
Integrating Raygun With Your Application
After signing up, you'll be presented with a short Raygun integration wizard. It starts with selecting your language of choice. Here's the initial dashboard that you'll see:
Here's an example of integrating Raygun with any JavaScript code or platform.
Using Raygun With JavaScript
Once you select JavaScript, you'll be shown your Application API Key (the key is the same for all platforms you choose).
Raygun is easy to use regardless of which JavaScript package management system you prefer:
For example, with Bower, run:
bower install raygun4js
From NuGet, open the console and run:
Install-Package raygun4js
But, you can also just load the library from Raygun's CDN within your application:
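A minimal sketch of what that looks like (the CDN URL and init call follow the raygun4js documentation at the time of writing; check them against Raygun's current docs, and substitute your own API key):

```html
<script type="text/javascript" src="//cdn.raygun.io/raygun4js/raygun.min.js"></script>
<script>
  // Replace with the Application API Key from your Raygun dashboard,
  // then attach the global error handler.
  Raygun.init('yourApiKeyHere').attach();
</script>
```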
For WordPress, once you've installed the pre-built plugin, you load the configuration menu from the WordPress dashboard and provide your API key:
Within a minute, you'll start seeing errors collect in the Raygun dashboard. If not, click the Send Test Error button to trigger one.
The Raygun Dashboard
Initially, you'll see an empty dashboard:
But, once you've chosen your language and integrated your application, you'll see a populated dashboard like this one. (Take note, theme developers: Raygun helped me discover a plethora of WordPress theme code that hadn't been kept up to date with the latest versions of PHP.)
Tracking Errors Across Code Deployments
When you integrate Raygun with your deployment tools, it can track errors according to specific versions of released software. This can help you identify and repair bad deployments quickly and easily:
Raygun currently lets you assign error groups to one of five statuses. These are:
Active
Resolved
Resolved In Version x.xx
Ignored
Permanently Ignored
When an error is first received, it is assigned to Active and is visible in the first tab. You can then take action to change it to another status.
For example, as soon as I activated Raygun with WordPress and discovered a plethora of theme-related PHP compatibility issues, my email queue began to fill—but this was easily resolved by asking Raygun to only notify me of new reports.
You can also filter and manage issues by status through the interface quite easily. For example, it would be easy to delete all the errors resolved in WordPress version 4.3.
Raygun Error Detailed Views
When you click on errors, Raygun shows you their detail view with stack trace and a summary of which users and browsers or devices are being affected:
In detail view, Raygun also allows you and your team to comment and discuss specific issues:
Raygun User Tracking
If you implement user tracking with your Raygun integration, you can see exactly which of your authenticated users have run into specific errors and how often:
Raygun offers easy documentation for linking error reports to the current signed-in user. Here's an example for JavaScript:
By default, Raygun4JS assigns a unique anonymous ID to the current user, which is stored as a cookie. If the current user changes, you can reset it and assign a new ID by calling:
Raygun.resetAnonymousUser();
To disable anonymous user tracking, call Raygun.init('apikey', { disableAnonymousUserTracking: true });.
You can provide additional information about the currently logged-in user to Raygun by calling Raygun.setUser('unique_user_identifier');.
This method takes additional parameters that are used when reporting on the affected users. The full method signature is:
setUser: function (user, isAnonymous, email, fullName, firstName, uuid)
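To make the call shape concrete, here's a runnable sketch using a stand-in Raygun object (the real global comes from raygun4js; the user details here are invented):

```javascript
// Stand-in that mimics raygun4js's setUser signature, purely for illustration.
const Raygun = {
  user: null,
  setUser(user, isAnonymous, email, fullName, firstName, uuid) {
    this.user = { user, isAnonymous, email, fullName, firstName, uuid };
  },
};

// After a successful sign-in, link future error reports to this user.
Raygun.setUser('user_12345', false, 'jane@example.com', 'Jane Doe', 'Jane');

console.log(Raygun.user.user);  // → user_12345
console.log(Raygun.user.email); // → jane@example.com
```

With the real library, you would typically make this call in your sign-in handler so that every subsequent error report carries the user's identity.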
Managing Your Team
Raygun is built around tracking issues across development teams. Through the settings area, it's easy to add applications that you're tracking and invite team members to participate:
As mentioned above, Raygun easily integrates with other team-based tools such as chat (Slack, Hipchat, etc.), project management (JIRA, Sprintly, etc.) and issue trackers (GitHub, Bitbucket, etc.).
Helpful Customer Support
Raygun support is excellent. In addition to the web-based documentation and email welcome guides, there's helpful support personnel (like Nick) ready to guide you deeper into the service—Nick's tips and availability just popped up as I was reviewing the service:
The Raygun API
If you'd like to tailor or customize event triggers, you can post errors via the Raygun API however you'd like from your application. This can be helpful for developers wishing to integrate monitoring or specialized reporting across their services or to make the development process easier.
In Summary
I hope you've found Raygun easy to use and helpful to your development requirements. To recap, here are some of the major benefits of the service:
Raygun provides a complete overview of problems across your entire development stack. Intelligent grouping of errors lets you see the highest priority issues rather than flooding you with notifications for every error.
Raygun supports all major programming languages and platforms. Every developer can use it. Developer time is expensive, so stop wasting time trying to hunt down bugs. Fix issues faster and build more features instead!
Raygun is built for teams. You can invite unlimited team members to your account—no restrictions. Raygun helps you create a team workflow for fixing bugs and provides custom notifications and a daily digest of error events for all of your team.
For large corporate entities, Raygun Enterprise can provide cloud support or the ability to securely self-host a version of the service for your needs.
When you give Raygun a try, please let us know your questions and comments below. You can also reach me on Twitter @reifman or email me directly. Or, if Raygun saves you a ton of time right away, you can browse my Tuts+ instructor page to read the other tutorials I've written.
Alongside iOS 9 and watchOS 2, Apple introduced on-demand resources, a new API for delivering content to your applications while also reducing the amount of space the application takes up on the user's device. With on-demand resources, you can tag specific assets of your application and have them hosted on Apple's servers, allowing your users to download them when needed. In this tutorial, I am going to show you the basics of on-demand resources by creating a basic image viewer application.
Prerequisites
This tutorial requires that you are running Xcode 7+ and are familiar with iOS development. You will also need to download the starter project from GitHub.
1. On-Demand Resources
Benefits
On-demand resources were introduced in iOS 9 and watchOS 2 for the main purpose of reducing the amount of space individual apps take up on a device. Another important advantage of on-demand resources is that your app can be downloaded and opened by users much quicker.
On-demand resources work by assigning unique tags to resources within Xcode to create what's called an asset pack. These packs can include anything from asset catalogs (images, SpriteKit textures, data, etc.) to other files, such as OpenGL and Metal shaders, and SpriteKit and SceneKit scenes and particle systems.
When you submit your app to the App Store, these resources are also uploaded and are hosted there in order to be downloaded at any time. To download asset packs at runtime in an app, you simply use the tag for each pack that you assigned in Xcode.
Categories
An app that uses on-demand resources has two main parts: the app bundle, which contains your app's executable code and essential assets such as user interface icons, and the asset packs.
For these asset packs, there are three main categories which you can organize in Xcode:
Initial Install: This is for content that is needed for your app to run for the first time, but that can be deleted later. This could include the first few levels of a game, which are no longer needed once the player progresses far enough into the game.
Prefetched: This category includes content that you want to be downloaded immediately after your app has finished installing. This type of content is recommended for resources that are not required for your app to function after installing it, but that are needed for a better user experience. A good example is a game's tutorial content.
On Demand: This category is aimed at content that you need at a later time and your app can function without. When working with on-demand resources, this is the most common type of category that you will use.
Limits
Apps that are built with support for on-demand resources must also stick to the following limits with regards to file size:
2GB for the iOS app bundle
2GB for the initial install tags
2GB for the prefetched tags
2GB for in-use resources. This is only important when your application is running and using on-demand resources.
512MB for each individual asset pack. No single tag can contain more than this amount of data. If you go over this limit, Xcode will give you a warning and will allow you to still test and develop your app. Any submission attempts to the App Store, however, will fail.
20GB for all the resources hosted by Apple. This is the total amount of resources your app can possibly download at any one time. While only 2GB can be used at any one time, if a user's device has enough storage, up to 20GB of your resources can be downloaded and made accessible to your app at any time.
App Slicing
Note that the 20GB total does not account for app slicing while all of the other totals do. What is app slicing? App slicing is another feature introduced in iOS 9 to reduce the size of applications. It does this by only delivering the resources specific to the device that the app is being installed on. For example, if asset catalogs are used correctly, an app that's installed on an iPhone 6 Plus or 6s Plus only needs to download the 3x scale images and not worry about the 1x and 2x scales. For on-demand resources, the 20GB of total resources you can upload to the App Store servers is the total amount across all device types. All of the other limits apply to each specific device your app is being installed on.
Deleting On-Demand Resources
In terms of data deletion (purging), asset packs that your app has downloaded will only be removed when the device your app is installed on is running out of available space. When this happens, the on-demand resources system will look at all apps on the device and, upon selecting one, will consider the preservation priority of each asset pack as well as when it was last used. One important thing to note is that asset packs for your app will never be purged while your app is running.
2. Assigning and Organizing Tags
Open the starter project in Xcode and run the app in the iOS Simulator. At the moment, this basic app contains a collection of images each with a combination of one of three colors (red, green, or blue) and one of four shapes (circle, square, star, or hexagon). With the app running, navigate to Colors > Red and you will see a single red circle image displayed on the screen.
In this app, we are going to set up a total of seven asset packs, one for each color and one for each shape. Another great feature of on-demand resources is that a single resource can be assigned more than one tag. The red circle, for example, can be a part of both the Red asset pack and Circle asset pack.
The on-demand resources API is also smart enough to not download or copy the same resource twice. In other words, if an application had already downloaded the Red asset pack and then wanted to load the Circle asset pack, the red circle image would not be downloaded again.
In Xcode, open Assets.xcassets. You should see all twelve images as shown below.
Next, select the Blue Square image set and open the Attributes Inspector on the right.
You will see that the Attributes Inspector includes a new On Demand Resource Tags section, which is where you assign tags to each resource. For the blue square image set, enter Blue and Square in the On Demand Resource Tags field. This means that the image set now has two tags assigned to it.
Note that the starter project already includes resource tags for nine of the twelve image sets. This explains why Xcode offered autocompletion suggestions when you entered these tags.
Once you have completed assigning tags for the Blue Square image set, add the correct tags to both the Green Hexagon and Red Circle image sets as shown below.
With the on-demand resource tags correctly set up, open the Project Navigator on the left. Then open the Resource Tags tab and select the Prefetched filter at the top.
You can now see how large each asset pack is and exactly what resources are in each one. The All filter shows you each of the on-demand resources. The Prefetched filter shows the on-demand resources per category and it lets you move resources from one category to another:
Initial Install Tags
Prefetched Tag Order
Download Only On Demand
These sections mirror the three categories of asset packs that I outlined earlier. One important thing to note is that the asset packs you put in the Prefetched Tag Order section will begin downloading in the order that they appear in.
With tags assigned to each image set, it is time to start accessing the resources in the project.
3. Accessing Resources on Demand
Accessing asset packs that are hosted on the App Store servers is handled by the new NSBundleResourceRequest class. An instance of this class is created with a set of tags that you want to use. It tells the system about your usage of the corresponding asset packs. The deallocation of these NSBundleResourceRequest objects is the best and easiest way of telling the operating system when you are no longer using a particular asset pack. This is important so that you don't exceed the 2GB limit for resources that are in use.
In your project, open DetailViewController.swift and add the following property to the DetailViewController class.
var request: NSBundleResourceRequest!
Next, replace your viewDidAppear(_:) method with the following:
override func viewDidAppear(animated: Bool) {
super.viewDidAppear(animated)
request = NSBundleResourceRequest(tags: [tagToLoad])
request.beginAccessingResourcesWithCompletionHandler { (error: NSError?) -> Void in
// Called on background thread
if error == nil {
NSOperationQueue.mainQueue().addOperationWithBlock({ () -> Void in
self.displayImages()
})
}
}
}
With this code, you first initialize the request property with a set that includes a single tag. The set of tags you provide to this initializer contains string values. In this case, we use the tagToLoad property, which is set by the previous view controllers in the application.
Next, we begin downloading the asset packs for the specified tags by calling beginAccessingResourcesWithCompletionHandler(_:). This method will access all resources with the specified tags and will automatically start a download if needed. After accessing the resources in this manner, all of your other code for loading these resources into your app remains the same.
Note that if you wish to access only resources that have already been downloaded, without triggering a new download, you can use the conditionallyBeginAccessingResourcesWithCompletionHandler(_:) method.
As shown in the code above, one important thing to remember about this completion handler is that it is called on a background thread. This means that any user interface updates you want to make upon completion will need to be executed on the main thread.
Build and run your app again and choose a color or shape to view in the app. You should see all three colored images for a specific shape or all four shapes for a specific color.
That's how simple it is to use on-demand resources. You have now successfully implemented on-demand resources in an application.
An important debugging feature available in Xcode 7 is the ability to see which asset packs you have downloaded and which ones are in use. To view this, navigate to the Debug Navigator with your app running and select Disk. You will see a screen similar to the one shown below. On Demand Resources is the section that we're interested in.
As an example, let's now change the download priority so that some resources are always downloaded immediately. At the same time, we will change the preservation priorities of the asset packs so that the Hexagon and Star asset packs are purged before the Circle and Square asset packs. Update the implementation of the viewDidAppear(_:) method as shown below.
override func viewDidAppear(animated: Bool) {
super.viewDidAppear(animated)
request = NSBundleResourceRequest(tags: [tagToLoad])
request.loadingPriority = NSBundleResourceRequestLoadingPriorityUrgent
NSBundle.mainBundle().setPreservationPriority(1.0, forTags: ["Circle", "Square"])
NSBundle.mainBundle().setPreservationPriority(0.5, forTags: ["Hexagon", "Star"])
request.beginAccessingResourcesWithCompletionHandler { (error: NSError?) -> Void in
// Called on background thread
if error == nil {
NSOperationQueue.mainQueue().addOperationWithBlock({ () -> Void in
self.displayImages()
})
}
}
}
After initializing the request, we set the loadingPriority property to NSBundleResourceRequestLoadingPriorityUrgent. Alternatively, you can assign any number between 0.0 and 1.0 to this property in order to dictate the loading priority within your app.
The advantage of using this constant is that it automatically gives the request the highest loading priority, but it also disregards current CPU activity. In some situations, if the device's CPU is being used heavily, the download of an asset pack can be delayed.
Next, we set the preservation priority for all four shape tags. This is done by calling the setPreservationPriority(_:forTags:) method on the application's main bundle. We've now ensured that, if the on-demand resources system needs to purge some assets from our app, the Hexagon and Star asset packs will be deleted first.
4. Best Practices
Now that you know how to implement on-demand resources in an iOS application, I want to tell you briefly about a few best practices to keep in mind.
Keep Individual Tags as Small as Possible
In addition to reducing download times and making your resources more accessible, keeping each asset pack as small as possible prevents the system from over-purging. This is when the on-demand resources system needs to free up a certain amount of space and ends up freeing a lot more than necessary.
For example, if the system needed to free up 50MB of space and, based on the conditions mentioned earlier, decided that a 400MB asset pack from your app was the most suitable to be deleted, the system would over-purge 350MB. This means that if your app has lost more data than it needed to, it will need to download all of the resources associated with that tag again. The recommended size for individual tags is approximately 64MB.
Download Tags in Advance
If your app has a very predictable user interaction, then it is best to start downloading resources before they are actually needed. This is to improve the user's experience as they then don't have to stare at a loading screen while your app downloads content.
Games are a common example. If the player has just completed level 5, then it's a good idea to start downloading level 7 while she plays level 6.
Stop Accessing Resources Correctly
When you are done using a particular asset pack, make sure that either your NSBundleResourceRequest object is deallocated or you call the endAccessingResources method on it.
Not only will this avoid your application hitting the 2GB limit for in-use resources, it will also help the on-demand resources system to know when your app uses those resources, which means that it can better decide what to purge if more space is needed.
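As a sketch of that second option, reusing the request property from the DetailViewController above, you might end access when the view goes away (whether this is the right lifecycle hook depends on your app's navigation):

```swift
override func viewDidDisappear(animated: Bool) {
    super.viewDidDisappear(animated)

    // Tell the system these asset packs are no longer in use, so they
    // become candidates for purging and stop counting toward the 2GB
    // in-use limit.
    request?.endAccessingResources()
    request = nil
}
```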
5. On-Demand Resources for tvOS
I recently wrote about tvOS development and in that tutorial I mentioned the limitations of tvOS applications. The maximum size for an app bundle is 200MB so it is highly recommended that you utilize on-demand resources in your tvOS apps whenever possible.
Due to the similarities of tvOS and iOS, the APIs and storage limits (except for the app bundle) for on-demand resources are the same. When working with on-demand resources on tvOS, however, it is also important to remember that all assets, such as images, have a single 1x scale version, so the size of your asset packs as shown in Xcode will not decrease due to app slicing.
Conclusion
On-demand resources in iOS 9 and tvOS is a great way to reduce the size of your app and deliver a better user experience to people who download and use your application. While it's very easy to implement and set up, there are quite a few details that you must keep in mind in order for the whole on-demand resources system to work flawlessly without excessive loading times and unnecessarily purging data.
As always, be sure to leave your comments and feedback in the comments below.
Alongside iOS 9 and watchOS 2, Apple introduced on-demand resources, a new API for delivering content to your applications while also reducing the amount of space the application takes up on the user's device. With on-demand resources, you can tag specific assets of your application, have them hosted on Apple's servers, allowing your users to download them when needed. In this tutorial, I am going show you the basics of on-demand resources by creating a basic image viewer application.
Prerequisites
This tutorial requires that you are running Xcode 7+ and are familiar with iOS development. You will also need to download the starter project GitHub.
1. On-Demand Resources
Benefits
On-demand resources were introduced in iOS 9 and watchOS 2 for the main purpose of reducing the amount of space individual apps take up on a device. Another important advantage of on-demand resources is that your app can be downloaded and opened by users much quicker.
On-demand resources work by assigning unique tags to resources within Xcode to create what's called an asset pack. These packs can include anything from asset catalogs (images, SpriteKit textures, data, etc.) or even just other files, such as OpenGL and Metal shaders as well as SpriteKit and SceneKit scenes and particle systems.
When you submit your app to the App Store, these resources are also uploaded and are hosted there in order to be downloaded at any time. To download asset packs at runtime in an app, you simply use the tag for each pack that you assigned in Xcode.
Categories
The two main aspects of an app that uses on-demand resources are the app bundle, which is full of executable code for your app and essential assets, such as user interface icons, and asset packs.
For these asset packs, there are three main categories which you can organize in Xcode:
Initial Install: This is for content that is needed for your app to run for the first time, but that can be deleted later. This could include the first few levels of a game, which are no longer needed once the player progresses far enough into the game.
Prefetched: This category includes content that you want to be downloaded immediately after your app has finished installing. This type of content is recommended for resources that are not required for your app to function after installing it, but that are needed for a better user experience. A good example are tutorials for a game.
On Demand: This category is aimed at content that you need at a later time and your app can function without. When working with on-demand resources, this is the most common type of category that you will use.
Limits
Apps that are built with support for on-demand resources must also stick to the following limits with regards to file size:
2GB for the iOS app bundle
2GB for the initial install tags
2GB for the prefetched tags
2GB for in-use resources. This is only important when your application is running and using on-demand resources.
512MB for each individual asset pack. No single tag can contain more than this amount of data. If you go over this limit, Xcode will give you a warning and will allow you to still test and develop your app. Any submission attempts to the App Store, however, will fail.
20GB for all the resources hosted by Apple. This is the total amount of resources your app can possibly download at any one time. While only 2GB can be used at any one time, if a user's device has enough storage, up to 20GB of your resources can be downloaded and made accessible to your app at any time.
App Slicing
Note that the 20GB total does not account for app slicing while all of the other totals do. What is app slicing? App slicing is another feature that was introduced in iOS 9 to reduce the size of applications. It does this by only looking at the resources specific to the device that the app is being installed on. For example, if asset catalogs are used correctly, an app that's installed on an iPhone 6 Plus or 6s Plus, only needs to download the 3x scale images and not worry about the 1x and 2x scales. For on-demand resources, the 20GB of total resources you can upload to the App Store servers is the total amount across all device types. All of the other limits are for each specific device your app is being installed on.
Deleting On-Demand Resources
In terms of data deletion (purging), asset packs that your app has downloaded will only be removed when the device your app is installed on is running out of available space. When this happens, the on-demand resources system will look at all apps on the device and upon selecting one will look at the preservation property of each asset pack as well as when it was last used. One important thing to note is that asset packs for your app will never be purged while your app is running.
2. Assigning and Organizing Tags
Open the starter project in Xcode and run the app in the iOS Simulator. At the moment, this basic app contains a collection of images each with a combination of one of three colors (red, green, or blue) and one of four shapes (circle, square, star, or hexagon). With the app running, navigate to Colors > Red and you will see a single red circle image displayed on the screen.
In this app, we are going to set up a total of seven asset packs, one for each color and one for each shape. Another great feature of on-demand resources is that a single resource can be assigned more than one tag. The red circle, for example, can be a part of both the Red asset pack and Circle asset pack.
The on-demand resources API is also smart enough to not download or copy the same resource twice. In other words, if an application had already downloaded the Red asset pack and then wanted to load the Circle asset pack, the red circle image would not be downloaded again.
In Xcode, open Assets.xcassets. You should see all twelve images as shown below.
Next, select the Blue Square image set and open the Attributes Inspector on the right.
You will see that the Attributes Inspector includes a new On Demand Resource Tags section, which is where you assign tags to each resource. For the blue square image set, enter Blue and Square in the On Demand Resource Tags field. This means that the image set now has two tags assigned to it.
Note that the starter project already includes resource tags for nine of the twelve image sets. This explains why Xcode provides autocompletion options for you when you entered these tags.
Once you have completed assigning tags for the Blue Square image set, add the correct tags to both the Green Hexagon and Red Circle image sets as shown below.
With the on-demand resource tags correctly set up, open to the Project Navigator on the left. Open the Resource Tags tab at the top and select the Prefetched filter at the top.
You can now see how large each asset pack is and exactly what resources are in each one. The All filter shows you each of the on-demand resources. The Prefetched filter shows the on-demand resources per category and it lets you move resources from one category to another:
Initial Install Tags
Prefetched Tag Order
Download Only On Demand
These sections mirror the three categories of asset packs that I outlined earlier. One important thing to note is that the asset packs you put in the Prefetched Tag Order section will begin downloading in the order that they appear in.
With tags assigned to each image set, it is time to start accessing the resources in the project.
3. Accessing Resources on Demand
Accessing asset packs that are hosted on the App Store servers is handled by the new NSBundleResourceRequest class. An instance of this class is created with a set of tags that you want to use. It tells the system about your usage of the corresponding asset packs. The deallocation of these NSBundleResourceRequest objects is the best and easiest way of telling the operating system when you are no longer using a particular asset pack. This is important so that you don't exceed the 2GB limit for resources that are in use.
In your project, open DetailViewController.swift and add the following property to the DetailViewController class.
var request: NSBundleResourceRequest!
Next, replace your viewDidAppear(_:) method with the following:
override func viewDidAppear(animated: Bool) {
    super.viewDidAppear(animated)

    request = NSBundleResourceRequest(tags: [tagToLoad])
    request.beginAccessingResourcesWithCompletionHandler { (error: NSError?) -> Void in
        // Called on a background thread
        if error == nil {
            NSOperationQueue.mainQueue().addOperationWithBlock({ () -> Void in
                self.displayImages()
            })
        }
    }
}
With this code, you first initialize the request property with a set that includes a single tag. The set of tags you provide to this initializer contains string values. In this case, we use the tagToLoad property, which is set by the previous view controllers in the application.
Next, we begin downloading the asset packs for the specified tags by calling beginAccessingResourcesWithCompletionHandler(_:). This method will access all resources with the specified tags and will automatically start a download if needed. After accessing the resources in this manner, all of your other code for loading these resources into your app remains the same.
Note that if you wish to only access resources that have already been downloaded, without triggering a new download, you can use the conditionallyBeginAccessingResourcesWithCompletionHandler(_:) method.
As shown in the code above, one important thing to remember about this completion handler is that it is called on a background thread. This means that any user interface updates you want to make upon completion will need to be executed on the main thread.
Build and run your app again and choose a color or shape to view in the app. You should see all three colored images for a specific shape or all four shapes for a specific color.
That's how simple it is to use on-demand resources. You have now successfully implemented them in an application.
An important debugging feature available in Xcode 7 is the ability to see which asset packs you have downloaded and which ones are in use. To view this, navigate to the Debug Navigator with your app running and select Disk. You will see a screen similar to the one shown below. On Demand Resources is the section that we're interested in.
As an example, let's now change the download priority so that some resources are always downloaded immediately. At the same time, we will change the preservation priorities of the asset packs so that the Hexagon and Star asset packs are purged before the Circle and Square asset packs. Update the implementation of the viewDidAppear(_:) method as shown below.
override func viewDidAppear(animated: Bool) {
    super.viewDidAppear(animated)

    request = NSBundleResourceRequest(tags: [tagToLoad])
    request.loadingPriority = NSBundleResourceRequestLoadingPriorityUrgent

    NSBundle.mainBundle().setPreservationPriority(1.0, forTags: ["Circle", "Square"])
    NSBundle.mainBundle().setPreservationPriority(0.5, forTags: ["Hexagon", "Star"])

    request.beginAccessingResourcesWithCompletionHandler { (error: NSError?) -> Void in
        // Called on a background thread
        if error == nil {
            NSOperationQueue.mainQueue().addOperationWithBlock({ () -> Void in
                self.displayImages()
            })
        }
    }
}
After initializing the request, we set the loadingPriority property to NSBundleResourceRequestLoadingPriorityUrgent. Alternatively, you can assign any number between 0.0 and 1.0 to this property in order to dictate the loading priority within your app.
The advantage of using this constant is that it not only gives the request the highest loading priority, but also tells the system to disregard current CPU activity. Normally, if the device's CPU is under heavy load, the download of an asset pack can be delayed.
Next, we set the preservation priority for all four shape tags by calling the setPreservationPriority(_:forTags:) method on the application's main bundle. We've now ensured that, if the on-demand resources system needs to purge some assets from our app, the Hexagon and Star asset packs will be deleted first.
4. Best Practices
Now that you know how to implement on-demand resources in an iOS application, I want to tell you briefly about a few best practices to keep in mind.
Keep Individual Tags as Small as Possible
In addition to reducing download times and making your resources more accessible, keeping each asset pack as small as possible prevents the system from over-purging. This is when the on-demand resources system needs to free up a certain amount of space and ends up freeing a lot more than necessary.
For example, if the system needed to free up 50MB of space and, based on the conditions mentioned earlier, decided that a 400MB asset pack from your app was the most suitable to delete, it would over-purge by 350MB. Because your app has now lost far more data than necessary, it will have to download all of the resources associated with that tag again the next time they are needed. The recommended maximum size for an individual tag is approximately 64MB.
Download Tags in Advance
If user interaction in your app is very predictable, it is best to start downloading resources before they are actually needed. This improves the user experience, as users don't have to stare at a loading screen while your app downloads content.
Games are a common example. If the player has just completed level 5, then it's a good idea to start downloading level 7 while she plays level 6.
Stop Accessing Resources Correctly
When you are done using a particular asset pack, make sure that either your NSBundleResourceRequest object is deallocated or you call the endAccessingResources method on it.
Not only will this avoid your application hitting the 2GB limit for in-use resources, it will also help the on-demand resources system to know when your app uses those resources, which means that it can better decide what to purge if more space is needed.
5. On-Demand Resources for tvOS
I recently wrote about tvOS development, and in that tutorial I mentioned the limitations of tvOS applications. The maximum size for an app bundle is 200MB, so it is highly recommended that you utilize on-demand resources in your tvOS apps whenever possible.
Due to the similarities of tvOS and iOS, the APIs and storage limits (except for the app bundle) for on-demand resources are the same. When working with on-demand resources on tvOS, however, it is also important to remember that all assets, such as images, have a single 1x scale version so the size of your asset packs as shown in Xcode will not decrease due to app slicing.
Conclusion
On-demand resources in iOS 9 and tvOS are a great way to reduce the size of your app and deliver a better user experience to people who download and use your application. While they're very easy to set up, there are quite a few details that you must keep in mind in order for the whole on-demand resources system to work flawlessly, without excessive loading times or unnecessary purging of data.
As always, be sure to leave your feedback and questions in the comments below.
Last year, Google introduced Material Design and it became clear that motion and animation would be two of the most eye-catching features in modern Android applications. But Google didn't provide developers with an easy solution to integrate them in applications. As a result, many libraries were developed to solve the integration problem.
During this year's Google I/O, however, Google introduced the Android Design Support Library to make the adoption of Material Design easier. This lets developers focus on the features that make their applications unique.
1. Regions
In this tutorial, I'll show you how to implement the scrolling techniques shown in Google’s Material Design specification. Before we start, you should familiarize yourself with the available scrollable regions in an Android application. In the following image, you can see that there are four regions.
Status Bar
This is where notifications appear and the status of different features of the device are displayed.
Toolbar
The toolbar was formerly known as the action bar. It is now a more customizable view with the same functionalities.
Tab/Search Bar
This optional region is used to display the tabs that categorize the content of your application. You can read more about the usage of tabs and the different ways to display them in Google's Material Design specification. When suitable, you can also use it in Google’s lateral navigation.
Flexible Space
This is where you can display images or extended app bars.
With regards to scrolling techniques, it's the toolbar and the tab/search bar that respond when the content of your application is scrolling.
2. Project Setup
To follow along, you should be using the latest version of Android Studio. You can get it from the Android Developer website. To try these scrolling techniques, I recommend creating a new project (with a minimum API level of 15), because your application's layout will change significantly.
I've provided a starter project, which you can download from GitHub. You can use the starter project as a starting point and use the scrolling techniques in your own applications. Let's first add the following dependencies to your project's build.gradle file inside the app folder:
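As a sketch, the dependencies block might look like this (the 23.0.1 version numbers are assumptions; use the latest versions available to you):

```groovy
dependencies {
    // Android Design Support Library, which contains the new classes
    compile 'com.android.support:design:23.0.1'

    // Latest version of RecyclerView
    compile 'com.android.support:recyclerview-v7:23.0.1'
}
```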
With the first dependency, you get the Android Design Support Library, which includes the new classes we need for this tutorial.
With the second dependency, you get the latest version of RecyclerView. The version listed in the official article about creating lists won't be useful this time.
Next, you are going to need some dummy data to try these techniques and populate the RecyclerView. You can implement them yourself or copy the implementation from the InitialActivity class in the starter project.
3. Scrolling Technique 1
This technique hides the toolbar region when your application's content is being scrolled. You can see the technique in action in the following video.
For this layout design, you may think of something like this:
The problem with this layout is that you have to manage the events yourself, but it will be painless if you take advantage of the new classes. Let's modify it as follows:
the RelativeLayout is replaced with a CoordinatorLayout
the Toolbar is wrapped in an AppBarLayout
the Toolbar and RecyclerView receive a few additional attributes
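A minimal sketch of what the modified layout might look like (view IDs and theme attributes are assumptions; @string/appbar_scrolling_view_behavior is the constant provided by the Design Support Library):

```xml
<android.support.design.widget.CoordinatorLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <android.support.design.widget.AppBarLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content">

        <android.support.v7.widget.Toolbar
            android:id="@+id/toolbar"
            android:layout_width="match_parent"
            android:layout_height="?attr/actionBarSize"
            android:background="?attr/colorPrimary"
            app:layout_scrollFlags="scroll|enterAlways" />

    </android.support.design.widget.AppBarLayout>

    <android.support.v7.widget.RecyclerView
        android:id="@+id/recycler_view"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        app:layout_behavior="@string/appbar_scrolling_view_behavior" />

</android.support.design.widget.CoordinatorLayout>
```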
What are these new classes?
CoordinatorLayout
This layout is a new container and a supercharged FrameLayout that provides an additional level of control over touch events between child views.
AppBarLayout
This layout is another new container, designed specifically to implement many of the features of the Material Design app bar concept. Keep in mind that if you use it within another ViewGroup, most of its functionality won't work.
The key to this scrolling technique, and most other scrolling techniques that we'll discuss, is the CoordinatorLayout class. This special class can receive events from and deliver events to its child views in order for them to respond appropriately. It is designed to be used as the root container view.
To enable this technique, the app:layout_behavior attribute indicates which view will trigger the events in the Toolbar. In this case, that's the RecyclerView.
The app:layout_scrollFlags attribute of the Toolbar indicates to the view how to respond.
app:layout_scrollFlags="scroll|enterAlways"
The app:layout_scrollFlags attribute can have four possible values, which can be combined to create the desired effect:
scroll
This flag should be set for all views that need to scroll off-screen. Views that don't use this flag remain pinned to the top of the screen.
enterAlways
This flag ensures that any downward scroll will cause this view to become visible, enabling the quick return pattern.
enterAlwaysCollapsed
When a view has declared a minHeight and you use this flag, the view will only enter at its minimum height (collapsed), only expanding to its full height when the scrolling view has reached its top.
exitUntilCollapsed
This flag causes the view to scroll off-screen until it is collapsed (its minHeight is reached) before exiting.
You can now run the project, or press Control+R, and see this technique in action.
4. Scrolling Technique 2
This technique scrolls the toolbar off-screen while the tab bar region stays anchored to the top. You can see this technique in action in the following video.
For this technique, I'm going to reuse the layout from the previous technique and add a TabLayout view next to the Toolbar, inside the AppBarLayout.
The TabLayout view provides a horizontal layout to display tabs. You can add any number of tabs using the newTab method and set its behavior mode using the setTabMode method. Let's start by populating the tabs.
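As a sketch, populating the tabs from the activity might look like this (the tab_layout ID and tab titles are assumptions):

```java
TabLayout tabLayout = (TabLayout) findViewById( R.id.tab_layout );

// Fixed mode shows all tabs at once; use MODE_SCROLLABLE for many tabs
tabLayout.setTabMode( TabLayout.MODE_FIXED );

// Create each tab with newTab and attach it with addTab
tabLayout.addTab( tabLayout.newTab().setText( "Tab 1" ) );
tabLayout.addTab( tabLayout.newTab().setText( "Tab 2" ) );
tabLayout.addTab( tabLayout.newTab().setText( "Tab 3" ) );
```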
By changing the value of the app:layout_scrollFlags attribute, and adding and removing it from the Toolbar and TabLayout, you can get animations like those used in:
Google Play Store where the toolbar hides and the tab bar remains visible.
Foursquare where the tab bar scrolls off-screen while the toolbar stays at the top.
Play Music where both the toolbar and the tab bar scroll off-screen.
Take a look at the following videos for examples of this scrolling technique.
You can run your project and see this scrolling technique in action.
5. Scrolling Technique 3
For this scrolling technique, I'm going to make use of the flexible space region I mentioned in the beginning of this tutorial. I do this to shrink the initial height of the AppBarLayout as the content is scrolling up. The height of the AppBarLayout increases to its original height as the content is scrolled down. You can see this technique in action in the following video.
For this scrolling technique, I'm going to use the following layout:
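A sketch of such a layout is shown below (view IDs, the margin values, and the FloatingActionButton icon are assumptions):

```xml
<android.support.design.widget.CoordinatorLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <android.support.design.widget.AppBarLayout
        android:id="@+id/app_bar_layout"
        android:layout_width="match_parent"
        android:layout_height="192dp"
        android:background="?attr/colorPrimary">

        <android.support.design.widget.CollapsingToolbarLayout
            android:id="@+id/collapsing_toolbar"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            app:contentScrim="?attr/colorPrimary"
            app:expandedTitleMarginStart="48dp"
            app:expandedTitleMarginEnd="64dp"
            app:layout_scrollFlags="scroll|exitUntilCollapsed">

            <android.support.v7.widget.Toolbar
                android:id="@+id/toolbar"
                android:layout_width="match_parent"
                android:layout_height="?attr/actionBarSize"
                app:layout_collapseMode="pin" />

        </android.support.design.widget.CollapsingToolbarLayout>

    </android.support.design.widget.AppBarLayout>

    <android.support.v7.widget.RecyclerView
        android:id="@+id/recycler_view"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        app:layout_behavior="@string/appbar_scrolling_view_behavior" />

    <android.support.design.widget.FloatingActionButton
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:src="@android:drawable/ic_input_add"
        app:layout_anchor="@id/app_bar_layout"
        app:layout_anchorGravity="bottom|right|end" />

</android.support.design.widget.CoordinatorLayout>
```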
It certainly looks like a lot of code, so let's break it down. In this layout, I made the following changes:
The Toolbar is wrapped in a CollapsingToolbarLayout and both elements are put in the AppBarLayout.
The app:layout_scrollFlags attribute is moved from the Toolbar to the CollapsingToolbarLayout, because this container is now in charge of responding to scroll events.
A new attribute, app:layout_collapseMode, was added to the Toolbar. This attribute ensures that the Toolbar remains pinned to the top of the screen.
The AppBarLayout has a fixed initial height of 192dp.
A FloatingActionButton was added to the layout, below the RecyclerView.
What are these new classes for?
CollapsingToolbarLayout
This is a new view, designed specifically to wrap the Toolbar and implement a collapsing app bar. When using the CollapsingToolbarLayout class, you must pay special attention to the following attributes:
app:contentScrim
This attribute specifies the color to display when the CollapsingToolbarLayout is fully collapsed.
app:expandedTitleMarginStart and app:expandedTitleMarginEnd
These attributes specify the margins of the expanded title. They are useful if you plan to use the setDisplayHomeAsUpEnabled method in your activity and fill the new spaces created around the title.
FloatingActionButton
The floating action button is an important component of Material Design apps. You can now include floating action buttons in your layout with only a few lines of code. You can use the app:fabSize attribute to choose from two different sizes, standard (56dp) and mini (40dp). Standard is the default size.
The disappearing effect is achieved automatically by anchoring the floating action button to the AppBarLayout using the app:layout_anchor attribute. You can also specify the position relative to this anchor by using the app:layout_anchorGravity attribute.
Before running the project, we need to specify in the activity that the CollapsingToolbarLayout is going to display the title instead of the Toolbar. Take a look at the following code snippet for clarification.
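A sketch of what this might look like in the activity's onCreate method, after setContentView (the view IDs and the title string resource are assumptions):

```java
// Use the Toolbar as the action bar
Toolbar toolbar = (Toolbar) findViewById( R.id.toolbar );
setSupportActionBar( toolbar );

// The CollapsingToolbarLayout, not the Toolbar, now draws the title
CollapsingToolbarLayout collapsingToolbar =
        (CollapsingToolbarLayout) findViewById( R.id.collapsing_toolbar );
collapsingToolbar.setTitle( getString( R.string.app_name ) );
```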
Run the project to see the third scrolling technique in action.
6. Scrolling Technique 4
This scrolling technique uses the extended AppBarLayout, shown in the previous technique, to display an image. You can see this technique in the following video.
For this technique, I'm going to reuse the previous layout and modify it slightly:
In this layout, I made the following modifications:
The android:background attribute was removed from the AppBarLayout. Because the ImageView is going to fill this space, there's no need to have a background color.
The app:expandedTitleMarginStart and app:expandedTitleMarginEnd attributes were removed, because we're not using the setDisplayHomeAsUpEnabled method in the activity.
An ImageView was added before the Toolbar. This ordering is important to prevent the AppBarLayout from showing part of the image, instead of the primary color, when it is collapsed.
You may also have noticed that the ImageView has the app:layout_collapseMode attribute. The value of the attribute is set to parallax to implement parallax scrolling. In addition, you could also add the app:layout_collapseParallaxMultiplier attribute to set a multiplier.
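The relevant part of the modified layout might be sketched like this (the header_image drawable and the multiplier value are assumptions):

```xml
<android.support.design.widget.CollapsingToolbarLayout
    android:id="@+id/collapsing_toolbar"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:contentScrim="?attr/colorPrimary"
    app:layout_scrollFlags="scroll|exitUntilCollapsed">

    <!-- The ImageView comes before the Toolbar and scrolls with a parallax effect -->
    <ImageView
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:scaleType="centerCrop"
        android:src="@drawable/header_image"
        app:layout_collapseMode="parallax"
        app:layout_collapseParallaxMultiplier="0.7" />

    <android.support.v7.widget.Toolbar
        android:id="@+id/toolbar"
        android:layout_width="match_parent"
        android:layout_height="?attr/actionBarSize"
        app:layout_collapseMode="pin" />

</android.support.design.widget.CollapsingToolbarLayout>
```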
These are all the changes you have to do to get this scrolling technique running smoothly in your app. Run the project to see this scrolling technique in action.
7. Scrolling Technique 5
For this scrolling technique, the flexible space is overlapped by the content of the app and is scrolled off-screen when the content is scrolled. You can see this technique in action in the following video.
For this technique, you can reuse the layout from the previous technique, with a few little modifications.
The ImageView and the FloatingActionButton inside the CollapsingToolbarLayout were removed. This technique does not require an image.
In the CollapsingToolbarLayout, the app:contentScrim attribute was replaced with the android:background attribute. We do this, because the background color needs to match the Toolbar background color nicely when disappearing.
The android:background attribute was added to the Toolbar.
The app:behavior_overlapTop attribute was added to the RecyclerView. This is the most important attribute for this scrolling technique, as it specifies the amount of overlap the view should have with the AppBarLayout. For this attribute to take effect, it must be added to the same view that has the app:layout_behavior attribute.
If you try to use this scrolling technique with these modifications, then the resulting layout won't have a title in the Toolbar. To solve this, you could create a TextView and add it to the Toolbar programmatically.
TextView text = new TextView(this);
text.setText(R.string.title_activity_technique5);
text.setTextAppearance(this, android.R.style.TextAppearance_Material_Widget_ActionBar_Title_Inverse);
toolbar.addView(text);
Conclusion
Note that you don't need to implement every one of these techniques in your app. Some will be more useful to your design than others. Now that you know how to implement each one, you can choose and experiment with them.
I hope you found this tutorial useful. Don't forget to share it if you liked it. You can leave any comments and questions below.
One of the most useful features for users is maps integration. In the previous installment of this series, we discussed how to set up Google Maps for Android using the Google Developer Console and how to create a basic Google Maps fragment. We then went over adding different kinds of markers and how to draw on the map.
In this tutorial, you will expand on what you learned in the last article in order to lay views on top of a map, override the indoor level selector controls, and add a Street View component to your applications. The source code for this article can be found on GitHub.
1. Getting Set Up
To start, follow the steps listed in the previous article of this series to create a basic project using a MapFragment, attach it to an Activity, and activate the Google Maps API through the Google Developers Console. For this tutorial, you don't need the location Play Services classes, but you do need to add the Maps Play Services library to the dependencies node of your build.gradle file.
Once that's done, you end up with a screen that looks like the following:
Next, you need to set up your camera. For this tutorial, we will focus on Madison Square Garden in New York City, because it's a great example of a building using the indoor level maps.
In onViewCreated, you can add a call to the following helper method, initCamera. You may remember that we need to wait until onViewCreated to work with Google Maps, because this is when we know the map object is ready for use.
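A sketch of what initCamera might look like (the exact coordinates and the zoom level are assumptions; the zoom just needs to be close enough for the indoor level selector to appear):

```java
private void initCamera() {
    // Madison Square Garden area; zoom 18 is assumed to be close enough
    // for the indoor level selector to become visible
    CameraPosition position = CameraPosition.builder()
            .target( new LatLng( 40.7506, -73.9936 ) )
            .zoom( 18f )
            .build();

    getMap().animateCamera( CameraUpdateFactory.newCameraPosition( position ), null );
}
```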
The above method moves the camera to our target and zooms in close enough that the indoor selector becomes visible. You'll notice that there's a strip of numbers on the right side of the screen and an overlay on the map for each floor. When you select a different level on the right, the current floor plan animates into the new one. This is the feature that you will work with later in order to have your own view control level selection.
Next, you need to implement the three interfaces that will be used in this tutorial.
GoogleMap.OnIndoorStateChangeListener is used for determining when an indoor level selector has changed visibility.
SeekBar.OnSeekBarChangeListener is used with one of our view overlays to control level selection, rather than using the default set of buttons on the right.
GoogleMap.OnMapLongClickListener is used in this example for changing the displayed location of your Street View component.
public class MapFragment extends SupportMapFragment implements
        GoogleMap.OnIndoorStateChangeListener,
        GoogleMap.OnMapLongClickListener,
        SeekBar.OnSeekBarChangeListener {
Once you have added the required methods for those three interfaces, you can begin adding views on top of the map.
2. Overlaying Views
While the base features of Google Maps fit most needs, there will be times that you want to add additional views over the map in order to perform actions. For this tutorial, we will add a SeekBar and some TextView objects in order to customize the controls for the indoor level selector.
Start by creating a new XML layout file, view_map_overlay.xml. Add the following code to create the base layout that will be used on the screen.
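One possible sketch of the overlay, with a horizontal SeekBar along the bottom and a label at either end (the IDs must match the ones referenced later in the fragment; sizes and positioning are assumptions):

```xml
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- Bottom floor label, at the start of the bar -->
    <TextView
        android:id="@+id/indoor_min_level"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:layout_alignParentLeft="true"
        android:layout_margin="16dp" />

    <!-- Top floor label, at the end of the bar -->
    <TextView
        android:id="@+id/indoor_max_level"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:layout_alignParentRight="true"
        android:layout_margin="16dp" />

    <!-- Level selector between the two labels -->
    <SeekBar
        android:id="@+id/indoor_level_selector"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:layout_toRightOf="@id/indoor_min_level"
        android:layout_toLeftOf="@id/indoor_max_level"
        android:layout_marginBottom="16dp" />

</RelativeLayout>
```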
Once your layout file is complete, you can add it as an overlay to your maps fragment. In onCreateView, you need to access the ViewGroup parent, inflate your new layout overlay, and attach it to the parent. This is also where you save references to each of the views in your overlay so that they can be changed later in your app.
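A sketch of that onCreateView implementation (the layout and view IDs are assumptions; the member names match the code used later in this tutorial):

```java
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup parent, Bundle savedInstanceState) {
    View view = super.onCreateView( inflater, parent, savedInstanceState );

    // Inflate the overlay and attach it directly to the map fragment's root view
    ViewGroup root = (ViewGroup) view;
    inflater.inflate( R.layout.view_map_overlay, root, true );

    // Save references to the overlaid views so they can be updated later
    mIndoorSelector = (SeekBar) root.findViewById( R.id.indoor_level_selector );
    mIndoorMaxLevel = (TextView) root.findViewById( R.id.indoor_max_level );
    mIndoorMinLevel = (TextView) root.findViewById( R.id.indoor_min_level );

    return view;
}
```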
When you run the application, you should see your views on top of the map. You will, however, also still see the default level selector, which clutters up the view.
In order to fix this, create a new method named initMapIndoorSelector and call it from onViewCreated. All it needs to do is set your listeners for the SeekBar and indoor level changes, and disable the default indoor level picker.
private void initMapIndoorSelector() {
    mIndoorSelector.setOnSeekBarChangeListener( this );
    getMap().getUiSettings().setIndoorLevelPickerEnabled( false );
    getMap().setOnIndoorStateChangeListener( this );
}
3. Overriding the Indoor Level Selector
Now that you have your view overlaying the map, you have to hide it until it's needed. In onViewCreated, call a new helper method named hideFloorLevelSelector that hides all of your overlaid views.
With your views created and hidden, you can start adding in the logic to make your views appear when needed and interact with the map. Earlier, you created the onIndoorBuildingFocused method as a part of the GoogleMap.OnIndoorStateChangeListener. In that method, you need to save a reference to whichever building is in focus and then hide or show the SeekBar controls when necessary.
An indoor building gains focus when the building is visible to the map camera and the map is zoomed in far enough. If those conditions are no longer met, this method will be called again and getMap().getFocusedBuilding() will return a null value.
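A sketch of that callback (hideFloorLevelSelector and showFloorLevelSelector are the helper methods discussed in this section):

```java
@Override
public void onIndoorBuildingFocused() {
    // Keep a reference to the building currently in focus, or null if none
    mIndoorBuilding = getMap().getFocusedBuilding();

    if( mIndoorBuilding == null ) {
        hideFloorLevelSelector();
    } else {
        showFloorLevelSelector();
    }
}
```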
showFloorLevelSelector makes all of the overlaid views visible, moves the SeekBar to the proper selected value, and sets the text labels to values representing the short name of the top and bottom floors for that building. When you retrieve the levels from an IndoorBuilding object, the bottom floor is the last item in the list and the top floor is at position 0.
private void showFloorLevelSelector() {
    if( mIndoorBuilding == null )
        return;

    int numOfLevels = mIndoorBuilding.getLevels().size();
    mIndoorSelector.setMax( numOfLevels - 1 );

    // Bottom floor is the last item in the list, top floor is the first
    mIndoorMaxLevel.setText( mIndoorBuilding.getLevels().get( 0 ).getShortName() );
    mIndoorMinLevel.setText( mIndoorBuilding.getLevels().get( numOfLevels - 1 ).getShortName() );

    mIndoorSelector.setProgress( mIndoorBuilding.getActiveLevelIndex() );
    mIndoorSelector.setVisibility( View.VISIBLE );
    mIndoorMaxLevel.setVisibility( View.VISIBLE );
    mIndoorMinLevel.setVisibility( View.VISIBLE );
}
The final method you need to implement for your indoor level selector is onProgressChanged(SeekBar seekBar, int progress, boolean fromUser). When the SeekBar position is changed, you need to activate a new level on the current building. Since the levels are ordered from top to bottom, you need to activate the level at position numOfLevels - 1 - progress in order to correlate with the position of the SeekBar.
@Override
public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
    if( mIndoorBuilding == null )
        return;

    int numOfLevels = mIndoorBuilding.getLevels().size();
    mIndoorBuilding.getLevels().get( numOfLevels - 1 - progress ).activate();
}
4. Adding Street View
Now that you know how to overlay views on a map and how to work with the indoor level selector, let's jump into how to work with Street View in your apps. Just like Google Maps, Street View allows you to use either a fragment or a view. For this example, you will use a StreetViewPanoramaView and overlay it onto your MapFragment.
This view will be initialized to show the street next to Madison Square Garden and when you long-press on a different area of the map, Street View will display images associated with the selected position. If you select to display an area that isn't directly connected to a Street View image, Google will pick the nearest to display if it's within a set distance. If no Street View images are nearby (say you select a location in the middle of the ocean), then Street View will show a black screen.
Something else to be aware of is that you can only have one StreetViewPanoramaView or fragment visible to the user at a time.
To start, update view_map_overlay.xml in order to add a StreetViewPanoramaView.
When your layout file is ready, go into onCreateView in your MapFragment, save a reference to your new view, and call the onCreate method for the view. It's important that you call onCreate, because the current fragment's onCreate has already been called before this view was attached, and the Street View component performs actions in onCreate that are necessary for initialization.
Next, in onViewCreated, add a new method called initStreetView. This new method will asynchronously get the StreetViewPanorama object when it's ready and handle showing your initial Street View position. It's important to note that getStreetViewPanoramaAsync( OnStreetViewPanoramaReadyCallback callback ) can only be called from the main thread.
private void initStreetView() {
    getMap().setOnMapLongClickListener( this );

    mStreetViewPanoramaView.getStreetViewPanoramaAsync( new OnStreetViewPanoramaReadyCallback() {
        @Override
        public void onStreetViewPanoramaReady(StreetViewPanorama panorama) {
            mPanorama = panorama;
            showStreetView( new LatLng( 40.7506, -73.9936 ) );
        }
    });
}
Finally, you need to define the showStreetView( LatLng latlng ) helper method shown above. This method creates a StreetViewPanoramaCamera object that allows you to change the tilt, zoom, and bearing of the Street View camera. For this example, the camera is set to the default values.
Next, you need to set the camera position. In this example, we also turn on an optional setting to show street names.
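Putting both steps together, a sketch of showStreetView might look like this:

```java
private void showStreetView( LatLng latLng ) {
    // Default camera values: no tilt or zoom, bearing pointing north
    StreetViewPanoramaCamera camera = new StreetViewPanoramaCamera.Builder()
            .tilt( 0f )
            .zoom( 0f )
            .bearing( 0f )
            .build();

    // Move to the requested position and apply the camera
    mPanorama.setPosition( latLng );
    mPanorama.animateTo( camera, 0 );

    // Optional setting to display street names
    mPanorama.setStreetNamesEnabled( true );
}
```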
Once your showStreetView( LatLng latlng ) method is complete, it can also be called from onMapLongClick(LatLng latLng) so you can easily change what area is being shown.
@Override
public void onMapLongClick(LatLng latLng) {
showStreetView( latLng );
}
Conclusion
In this tutorial, you learned about some advanced ways you can interact with Google Maps by adding additional views to the MapFragment and you learned how to control the indoor building level selector. We also covered the basics of adding Street View functionality to your application in order to display a different point of view for your users.
In the next installment of this series, you will learn about the Google Maps Utilities library and how to use it to add marker clusters, heat maps, and other useful features for your applications.
Watch Connectivity is a new communication framework released alongside iOS 9 and watchOS 2. Its main purpose is to transfer information easily and seamlessly between an Apple Watch application and its parent iOS application.
The framework provides many different functionalities. A few weeks ago, Jorge Costa wrote about the ability to send messages between an iOS and an Apple Watch application. In this tutorial, we will zoom in on transferring data in the background.
The ability to send messages is designed for data that is needed immediately by the other device. In contrast, background transfers are best suited for larger chunks of data that are not needed immediately by the counterpart. An exception to this is with complication information, which we'll discuss later in this tutorial.
Prerequisites
This tutorial requires that you are running Xcode 7 on OS X 10.10 or later. You will also need to download the starter project from GitHub.
1. Framework Setup
In order to use the Watch Connectivity framework, both your iOS and watchOS apps need a class that conforms to the WCSessionDelegate protocol and that correctly configures the default WCSession. The methods of the WCSessionDelegate protocol handle the receiving of all data via the Watch Connectivity framework and enable you to take control of the new data in your application.
Open the starter project in Xcode and edit AppDelegate.swift. At the top, add the following import statement:
import WatchConnectivity
Next, update the class definition of the AppDelegate class to make it conform to the WCSessionDelegate protocol.
class AppDelegate: UIResponder, UIApplicationDelegate, WCSessionDelegate {
We also declare a property of type WCSession! in the AppDelegate class to store a reference to the default WCSession object.
var session: WCSession!
Finally, update the application(_:didFinishLaunchingWithOptions:) method as shown below.
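As a sketch, the updated method might look like this (Swift 2 syntax, using the session property declared above):

```swift
func application(application: UIApplication, didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?) -> Bool {
    // Only configure the session if a paired, app-capable Apple Watch is available
    if WCSession.isSupported() {
        session = WCSession.defaultSession()
        session.delegate = self
        session.activateSession()
    }

    return true
}
```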
In application(_:didFinishLaunchingWithOptions:), we get a reference to the default WCSession object, set the session's delegate to your app's AppDelegate instance and, if supported, activate the session. The isSupported class method checks to see whether or not the counterpart watchOS app for your iOS app is installed on a paired Apple Watch and is able to send data.
The setup for the watchOS side is very similar. Open ExtensionDelegate.swift and replace its contents with the following:
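The result might look something like the following sketch, which mirrors the iOS setup (the property name is an assumption):

```swift
import WatchKit
import WatchConnectivity

class ExtensionDelegate: NSObject, WKExtensionDelegate, WCSessionDelegate {

    var session: WCSession!

    func applicationDidFinishLaunching() {
        // No isSupported() check is needed here: it always returns true on watchOS
        session = WCSession.defaultSession()
        session.delegate = self
        session.activateSession()
    }
}
```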
You will notice that we don't call isSupported on the WCSession class before activating the session. This is because this method always returns true on the watchOS side.
To check that everything is working correctly, run your Apple Watch app on either of the two simulators as shown below.
Next, run your iOS app on the same iPhone simulator type you selected when running the watch app.
Once your iOS app has launched, your Apple Watch simulator should just go back to the watch face as shown in the screenshot below.
2. Sending Data
With the default WCSession object correctly configured, it is time for us to send some data between the iOS and the Apple Watch application.
Open TableViewController.swift and add the following line of code at the end of the createNewItem(_:) method:
WCSession.defaultSession().transferUserInfo(item)
The transferUserInfo(_:) method accepts a dictionary as its only parameter. Once this method has been called, the user info dictionary you provided is added to the queue of information to be transferred.
Both iOS and watchOS work in conjunction with each other to transfer the information at an opportune time. The combined system looks at things like app usage, battery life, whether or not the other device is currently being used, etc. Once the system has transferred the information, the app on the other device will execute a delegate callback method the next time it is launched.
Now it's time for us to implement the receiving side on the Apple Watch. Open ExtensionDelegate.swift and add the following method to the ExtensionDelegate class:
func session(session: WCSession, didReceiveUserInfo userInfo: [String : AnyObject]) {
    dispatch_async(dispatch_get_main_queue()) { () -> Void in
        if let items = NSUserDefaults.standardUserDefaults().objectForKey("items") as? [NSDictionary] {
            var newItems = items
            newItems.append(userInfo)
            NSUserDefaults.standardUserDefaults().setObject(newItems as AnyObject, forKey: "items")
        } else {
            NSUserDefaults.standardUserDefaults().setObject([userInfo] as AnyObject, forKey: "items")
        }
    }
}
This method is called once the information has been transferred successfully and the Apple Watch application is running.
Note that while this tutorial is only showing an example of transferring information from iOS to watchOS, the WCSession and WCSessionDelegate methods behave exactly the same on both platforms for background transfers.
With this code implemented, run your Apple Watch app in the simulator. Next, run the iPhone app again and press the button to create a new item.
Now go back to the Apple Watch simulator and press Command-Shift-H twice to go back to the most recent app. You will see that the item you just created shows up on the Apple Watch.
Note that, while the information transfer happened immediately between the simulators, in a real-world situation with physical devices this will not always be the case.
3. Accessing the Pending Transfer Queue
With your iOS app still running, quit the Apple Watch simulator from the menu bar or by pressing Command-Q. After doing this, press the button in your iOS app to create a few more items as shown below.
Whenever you attempt to transfer information using the Watch Connectivity framework, it is added to a queue that is gradually cleared as the information is transferred. This queue, along with the individual transfers it contains, can be accessed at any time.
This is useful because you can see how many items are still pending, and you can even cancel specific transfers if you need to. The items you just created are currently being held in the user info queue because the Apple Watch is disconnected from the parent device, making a transfer impossible.
Open AppDelegate.swift and add the following code at the end of application(_:didFinishLaunchingWithOptions:):
let transfers = session.outstandingUserInfoTransfers
if transfers.count > 0 {
    let transfer = transfers.first!
    transfer.cancel()
}
With this code, we access the outstanding user info transfers and, if there is at least one, cancel the first transfer. The WCSessionUserInfoTransfer objects returned from the outstandingUserInfoTransfers property also have two read-only properties that you can access:
userInfo: This property stores the dictionary you are transferring.
transferring: This property stores a boolean value and indicates whether the user info is currently being transferred.
There isn't a great deal of functionality available with outstanding information transfers in the Watch Connectivity framework, but depending on your application, some of these features might be very useful.
4. Other Transfer Methods
In this tutorial, we have only covered user info background transfers, but there are a few other ways of transferring data between devices. Each of these methods is designed for a specific purpose when communicating between an iPhone and an Apple Watch.
Application Context
Use the application context when you need to transfer information between devices and only the most recent information is relevant. You transfer a single dictionary by calling the updateApplicationContext(_:error:) method. The error parameter in this method is a pointer to an NSError object, which will be filled with information if a problem occurs with the transfer.
On the receiving side you can implement the session(_:didReceiveApplicationContext:) method or, alternatively, access the application context via the default WCSession object's receivedApplicationContext property.
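On the sending side, Swift surfaces the NSError parameter as a throwing method, so a call might look like this (the dictionary contents are a hypothetical example):

```swift
do {
    try WCSession.defaultSession().updateApplicationContext(["latestItem": "Buy milk"])
} catch {
    print("Updating the application context failed: \(error)")
}
```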
Complication Information
Use this method when you need to transfer a single user info dictionary specifically for your app's custom complication. You can only send information from the iOS side, and this is done with the transferCurrentComplicationUserInfo(_:) method.
The key difference between this and the transferUserInfo(_:) method used earlier in this tutorial is that, when updating a complication, the system will always attempt to transfer the information immediately.
Note that a transfer is not guaranteed, as the devices can be disconnected or your complication may have exceeded its background execution budget. If a complication information transfer cannot be completed, it is added to the outstandingUserInfoTransfers queue, where it can be viewed and cancelled if needed.
Also note that, if a complication info transfer is in the queue and you call the transferCurrentComplicationUserInfo(_:) method again, the existing transfer in the queue will be invalidated and cancelled.
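A minimal call from the iOS side might look like this (the payload keys and values are hypothetical; your complication's data source decides what to do with them):

```swift
WCSession.defaultSession().transferCurrentComplicationUserInfo(["temperature": 72])
```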
Files
You can even use the Watch Connectivity framework to transfer files between devices. This is done via the transferFile(_:metadata:) method, where the first parameter is a local NSURL for the file and the second is an optional dictionary containing any additional data associated with that file.
As you would expect, receiving this file is handled by a method of the WCSessionDelegate protocol, the session(_:didReceiveFile:) method to be precise. In this method, you are given a single WCSessionFile object that contains a new local URL to the actual file as well as the metadata you transferred.
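As a sketch, the two sides might look like this (the file URL and metadata are hypothetical placeholders):

```swift
// Sending side: transfer a local file with an optional metadata dictionary
let fileURL = NSURL(fileURLWithPath: "/path/to/image.png")
WCSession.defaultSession().transferFile(fileURL, metadata: ["caption": "A photo"])

// Receiving side (a WCSessionDelegate method)
func session(session: WCSession, didReceiveFile file: WCSessionFile) {
    print(file.fileURL)   // new local URL to the transferred file
    print(file.metadata)  // the dictionary sent alongside it
}
```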
As with user info transfers, you can also view pending or in-progress file transfers via the default WCSession object's outstandingFileTransfers property.
Conclusion
Overall, the Watch Connectivity framework provides a simple, easy-to-use interface for transferring data between a connected iPhone and Apple Watch. The framework enables the transfer of user info, application context, and complication info dictionaries as well as files.
You should now be comfortable with both sending and receiving information using the Watch Connectivity framework as well as how you can interact with any outstanding transfers.
As always, please be sure to leave your questions and feedback in the comments below.
While the standard features of Google Maps are incredibly useful, there will be times that you want to do a little bit more. Luckily, Google has created an open source library containing a set of utilities that Android developers can use to make their applications even better with enhanced maps.
In this tutorial, you will learn how to use this utility library to add heat map visualizations for your data, cluster large numbers of markers for easier viewing, and use various utility methods for working with the spherical nature of the Earth or drawing routes on roads.
The source files for this tutorial can be found on GitHub.
2. Setup
In the first tutorial of this series, I went over how to set up a project using the Google Developer Console and how to add an API key to your manifest. For this tutorial, you need to get an API key and set up your project with a manifest as described there.
Next, open build.gradle and add two new dependencies, one for Play Services to use Google Maps and another one for the Google Maps Utils library.
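The dependencies block might look something like this (the version numbers are illustrative for the time of writing; use the latest available releases):

```groovy
dependencies {
    compile 'com.google.android.gms:play-services-maps:7.8.0'
    compile 'com.google.maps.android:android-maps-utils:0.4'
}
```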
I should note that the Google Maps Utils library is technically still in beta, though it has been available for the last two years. Once you have imported these libraries and synced the project, you need to update the layout file for MainActivity.java so that it uses the custom fragment shown below.
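The layout can be as simple as a single fragment element (the package name is a placeholder for your own):

```xml
<fragment xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/fragment_list"
    android:name="com.example.mapsutils.UtilsListFragment"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
```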
Next, create the UtilsListFragment class that is used above so that it displays a simple list of items representing the various parts of the library you will learn about in this tutorial.
public class UtilsListFragment extends ListFragment {

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        ArrayAdapter<String> adapter = new ArrayAdapter<String>( getActivity(), android.R.layout.simple_list_item_1 );
        String[] items = getResources().getStringArray( R.array.list_items );
        adapter.addAll( new ArrayList<String>( Arrays.asList(items) ) );
        setListAdapter( adapter );
    }

    @Override
    public void onListItemClick(ListView l, View v, int position, long id) {
        super.onListItemClick(l, v, position, id);
        String item = ( (TextView) v ).getText().toString();
        if( getString( R.string.item_clustering ).equalsIgnoreCase( item ) ) {
            startActivity( new Intent( getActivity(), ClusterMarkerActivity.class ) );
        } else if( getString( R.string.item_heat_map ).equalsIgnoreCase( item ) ) {
            startActivity( new Intent( getActivity(), HeatMapActivity.class ) );
        } else if( getString( R.string.item_polylines ).equalsIgnoreCase( item ) ) {
            startActivity( new Intent( getActivity(), PolylineActivity.class ) );
        } else if( getString( R.string.item_spherical_geometry ).equalsIgnoreCase( item ) ) {
            startActivity( new Intent( getActivity(), SphericalGeometryActivity.class ) );
        }
    }
}
Each of the strings is defined and placed into a string-array for uniformity.
Once your list is available, you need to create BaseMapActivity.java, which handles all of the common map related setup for each of the example activities that you will build. This Activity initializes a GoogleMap and zooms the camera in to a specified area. In this case, that area is the city of Denver in Colorado, USA. Everything in this class should look familiar from the last two articles in this series.
Now that you have the initial project built, you can continue on to the next section where you will create a new Activity for each utility that we're going to cover in this tutorial.
3. Heat Maps
Heat maps are an excellent way to visually represent concentrations of data points on a map. The Google Maps Utils library makes it easy to add them to an application. To start, create a new BaseMapActivity named HeatMapActivity and add it to your AndroidManifest.xml file. At the top of that class, declare a HeatmapTileProvider that we'll use to construct the map overlay.
private HeatmapTileProvider mProvider;
In BaseMapActivity, a method named initMapSettings is called that allows you to add your customizations to the map. For this Activity, you need to override that method to get an ArrayList of LatLng objects that is then used to generate the HeatmapTileProvider object.
The provider has various methods that can be used to change the appearance of your heat map, such as the gradient colors, the radius for each point, and the weight of each point. Once your provider is built, you can create the heat map TileOverlay and apply it to your map.
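An implementation of initMapSettings along those lines might look like this (a sketch; the builder's defaults are used for the gradient, radius, and opacity, and generateLocations is defined below):

```java
@Override
protected void initMapSettings() {
    ArrayList<LatLng> locations = generateLocations();
    mProvider = new HeatmapTileProvider.Builder()
            .data( locations )
            .build();
    mGoogleMap.addTileOverlay( new TileOverlayOptions().tileProvider( mProvider ) );
}
```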
In the above implementation of initMapSettings, generateLocations is a helper method that generates 1000 LatLng positions around the central map location.
private ArrayList<LatLng> generateLocations() {
    ArrayList<LatLng> locations = new ArrayList<LatLng>();
    double lat;
    double lng;
    Random generator = new Random();
    for( int i = 0; i < 1000; i++ ) {
        lat = generator.nextDouble() / 3;
        lng = generator.nextDouble() / 3;
        if( generator.nextBoolean() ) {
            lat = -lat;
        }
        if( generator.nextBoolean() ) {
            lng = -lng;
        }
        locations.add( new LatLng( mCenterLocation.latitude + lat, mCenterLocation.longitude + lng ) );
    }
    return locations;
}
Once you're done implementing initMapSettings and generateLocations, you can run your app and click on the heat map section to see it in action.
4. Clustering Markers
When a map has a lot of data points in a small area, it can get cluttered very quickly as the user zooms out. Not only this, but having too many markers displayed at once can cause some devices to slow down considerably.
In order to help alleviate some of the frustration caused by these issues, you can use the Google Maps Utils library to animate your markers into clusters. The first thing you need to do is create a new model object that implements the ClusterItem interface. This model needs to implement the getPosition method from the ClusterItem interface in order to return a valid LatLng object.
public class ClusterMarkerLocation implements ClusterItem {

    private LatLng position;

    public ClusterMarkerLocation( LatLng latLng ) {
        position = latLng;
    }

    @Override
    public LatLng getPosition() {
        return position;
    }

    public void setPosition( LatLng position ) {
        this.position = position;
    }
}
With the model created, you can create a new Activity called ClusterMarkerActivity and add it to your manifest. When you initialize your map, you need to create a ClusterManager, associate it with your GoogleMap, and add your LatLng positions as ClusterMarkerLocations to the ClusterManager for the utility to know what to cluster. Take a look at the implementation of initMarkers to better understand how this works.
private void initMarkers() {
    ClusterManager<ClusterMarkerLocation> clusterManager = new ClusterManager<ClusterMarkerLocation>( this, mGoogleMap );
    mGoogleMap.setOnCameraChangeListener( clusterManager );

    double lat;
    double lng;
    Random generator = new Random();
    for( int i = 0; i < 1000; i++ ) {
        lat = generator.nextDouble() / 3;
        lng = generator.nextDouble() / 3;
        if( generator.nextBoolean() ) {
            lat = -lat;
        }
        if( generator.nextBoolean() ) {
            lng = -lng;
        }
        clusterManager.addItem( new ClusterMarkerLocation( new LatLng( mCenterLocation.latitude + lat, mCenterLocation.longitude + lng ) ) );
    }
}
In this sample, we create 1000 random points to display and add them to the map. The Google Maps Utils library handles everything else for us.
5. Other Utilities
In addition to the last two items, the Google Maps Utils library is full of small useful utilities. If you have many different points that make up a route, you can encode them as a polyline and then add that polyline to your map using PolyUtil. This will display a path between each of the points along the map.
public class PolylineActivity extends BaseMapActivity {

    private static final String polyline = "gsqqFxxu_SyRlTys@npAkhAzY{MsVc`AuHwbB}Lil@}[goCqGe|BnUa`A~MkbG?eq@hRq}@_N}vKdB";

    @Override
    protected void initMapSettings() {
        List<LatLng> decodedPath = PolyUtil.decode( polyline );
        mGoogleMap.addPolyline( new PolylineOptions().addAll( decodedPath ) );
    }
}
In addition to PolyUtil, Google has added the SphericalUtil class, which can be used to measure distances or figure out geometry along the surface of a sphere. If you want to find the distance between two points on the map, you can call SphericalUtil.computeDistanceBetween( LatLng position1, LatLng position2 ), which returns the distance in meters as a double. If you want to find the heading between two points, you can call SphericalUtil.computeHeading( LatLng point1, LatLng point2 ).
In relation to this, another utility method in the SphericalUtil class allows you to find a point at a certain heading and distance away. I recommend browsing the documentation to learn more about the SphericalUtil class.
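As a quick sketch of these methods together (the coordinates are arbitrary; computeOffset is the library method that finds a point at a given distance and heading):

```java
LatLng denver = new LatLng( 39.7392, -104.9903 );
LatLng boulder = new LatLng( 40.0150, -105.2705 );

// Distance in meters and initial heading along the great circle
double meters = SphericalUtil.computeDistanceBetween( denver, boulder );
double heading = SphericalUtil.computeHeading( denver, boulder );

// The point halfway between the two, following that heading
LatLng halfway = SphericalUtil.computeOffset( denver, meters / 2, heading );
```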
Conclusion
In this tutorial, you have just scratched the surface of the Google Maps Utils library and all it has to offer. Other functionality it can add to your application includes adding overlays for KML data, creating custom markers, and helper methods for working with GeoJSON.
Luckily, Google has open sourced the entire library, so you can find the library's source code and demo code on GitHub. After having gone through the last three parts of this series, you should now be comfortable enough with Google Maps to add them to your own applications to enrich the user experience and make great apps.
You don't know what you have until you lose it. We all know what it means, but we often forget that it also applies to our health. In no way is this article intended to lecture you or make you feel guilty about your lifestyle. With this article, I simply want to share a few tips that can help you stay healthy as a programmer.
While programming isn't considered a dangerous occupation with a lot of hazards, a surprising number of developers suffer from health issues. Sitting at a desk won't kill you, but studies have shown that it isn't as healthy as you might think. Luckily, it's surprisingly easy to make some changes with very little effort.
1. Exercise
Even though this is probably the most obvious tip from the list, getting exercise on a regular basis is something that many developers tend to forget—or ignore. There's no need to become the next Ironman, but doing some form of exercise has a number of benefits that will keep you healthy, fit, and focused.
If you go to work by bike or by foot, then you're already ahead of the curve. Exercising regularly has a wide range of benefits that will pay dividends in the long run. It keeps your body fit, improves your health, and it also makes you mentally more resilient. The latter is especially important if your job is stressful or mentally taxing.
Repetitive strain injury (RSI) is a common problem among programmers. Exercise can help prevent RSI or reduce the symptoms. There are many techniques you can use to prevent RSI and exercise is definitely the easiest and the cheapest.
2. Sleep
This tip is especially important if you're still in your teens or twenties. At this age, programmers tend to forget that they are mortal. I'm in my thirties now, so I know what it feels like to think that nothing can stop you in your quest to conquer the world. While pulling an all-nighter won't kill you, it will take a toll on your health if you're not careful. The older you get, the longer it takes to recuperate from sleepless nights. Burning the midnight oil may be necessary on occasion, but respect your health by not making it a habit.
Sleep is something your body needs to recover from the previous day and stay fit. The body uses this time of inactivity to reboot your brain and repair any damage incurred during the day. Studies have shown that getting enough sleep significantly boosts your focus during the day, improves your immune system, and even enhances your memory.
Research has also shown that you cannot catch up on sleep. If you only get four hours of sleep on Monday, then it's not going to be very helpful to sleep ten hours on Tuesday. Not getting enough sleep has no advantages. It may be necessary at times, but keep in mind that it will have consequences if you're not careful.
3. Posture
Bad posture is a common problem these days. Even if your office is equipped with comfortable, ergonomic chairs and desks, it's important to be conscious of your posture. It may seem unimportant when you're in your twenties, but the number of young people suffering from chronic injuries caused by bad posture is alarming, and climbing.
Studies have shown that sitting at a desk for long periods of time isn't healthy for a variety of reasons. Some programmers solve this by using a standing desk. If you haven't heard of a standing desk, it's exactly what it says on the tin. Standing desks don't even need to be expensive.
If a standing desk is something you'd like to try out, then take into account that your body will need some time to adjust to your new setup. It's recommended to gradually increase the time you spend working at a standing desk. Did I mention that you also burn calories simply by standing up while you work?
Some people take it one step further and use a walking or treadmill desk. The main advantage over a standing desk is the number of calories you burn during the day. Note that the speed of the treadmill is very low, about two miles per hour. This is important for safety and to avoid injuries. The idea of a treadmill desk isn't running a marathon while you work. The goal is to keep your metabolic rate slightly above the basal metabolic rate.
4. Caffeine
Like many people, I love, love, love coffee. I know very few programmers who don't use, or misuse, caffeine as a form of fuel during the day. Some choose coffee or tea while others swear by energy drinks. While there's nothing wrong with caffeine (it even has some health benefits), it can seriously mess with your metabolism and biorhythm if misused.
A common misconception is that caffeine is fuel for your body. Even though most energy drinks are high in calories, caffeine itself doesn't contain any calories. Caffeine is nothing more than a molecule that interacts directly with your central nervous system.
Over the years, researchers have discovered a number of interesting findings about caffeine and how it affects people's performance. For example, people who consume caffeine on a regular basis need it to function properly. This is something most people know, but underestimate. If you have been using caffeine for a few weeks or months, then you need a dose of caffeine simply to stay on par with someone who doesn't depend on caffeine.
If the coffee machine at work is broken, then you know you have a rough day ahead of you, depending on how much caffeine your body is used to. Removing caffeine from your diet isn't easy for most people, especially if you work in a stressful environment. I've done it several times and I can assure you that the first few days can be very, very rough.
Caffeine is addictive and it's all too easy to relapse into drinking coffee when you need that bit of extra energy. But why would you want to cut out caffeine? The most important benefit is that you will sleep much, much better. Another major benefit is that you are no longer dependent on caffeine to function. Do you need extra energy? Go for a run. You'll be surprised by the amount of energy you get from something as simple as a five mile run.
5. Balance
Finding balance in your life isn't always easy. If you have a demanding job with a bunch of responsibilities, then it may be difficult to go offline for long periods of time. No matter what you do for a living, it's important to find a balance between work and life.
Everyone needs some time off to decompress and relax. Making a clear distinction between work and life can really help recharge your batteries. You'll be able to spend time with your family without being distracted by work. Put aside your phone and don't check email when spending time with friends and family. It truly helps you recover from the day and prepare for the next.
6. Diet
It goes without saying that a healthy diet is the cornerstone of a healthy lifestyle. That doesn't mean that you can't have a snack or that you need to cut out everything from your diet that's unhealthy. It simply means being conscious of what you eat and when you eat it. That simple act can make a world of difference.
As I mentioned earlier, black coffee doesn't contain calories so it's not a good idea to only drink coffee during the day. Make sure to take the time to eat in the morning and take a short break during lunch.
Do you also enjoy a good night's sleep? Then it's important to not overeat in the evening and not eat too late. If your body is still processing your dinner when you go to bed, then it won't have much time to rest. A healthy lifestyle isn't rocket science.
7. Disconnect
The power and possibilities of modern smartphones and smartwatches are amazing. They allow us to be connected with friends and family wherever we go. It's nice to see that someone favorited your tweet or friended you on Facebook. Right?
I strongly believe that there are times that you need to disconnect, put your computer aside, turn off your phone, and disconnect from the internet. It's a wonderful feeling to leave your smart devices at home and go for a walk in the woods or in the park. Your mind will thank you for it.
Enjoy your surroundings and, more importantly, the feeling that nobody will call or text you. Some of us or so hooked to being connected with the world that they feel anxious if their smartphone isn't within arm's reach. If that's you, then this may be an early warning bell that it's time to step back, even if it's only for ten or fifteen minutes.
Conclusion
When I was young, the internet wasn't a thing yet, smartphones still had to be invented, and smartwatches were science fiction. We are inundated with information from the moment we wake up until we go to bed. It doesn't take a genius to figure out that we need to take a break from time to time. Combine this with exercise and a healthy diet, and you have the ingredients for a successful career as a programmer. What tips do you have for a healthy lifestyle?
You don't know what you have until you lose it. We all know what that means, but we often forget that it also applies to our health. In no way is this article intended to lecture you or make you feel guilty about your lifestyle. With this article, I simply want to share a few tips that can help you stay healthy as a programmer.
While programming isn't considered a dangerous occupation with a lot of hazards, a surprising number of developers suffer from health issues. Sitting at a desk won't kill you, but studies have shown that it isn't as healthy as you might think. Luckily, it's surprisingly easy to make some changes with very little effort.
1. Exercise
Even though this is probably the most obvious tip from the list, getting exercise on a regular basis is something that many developers tend to forget—or ignore. There's no need to become the next Ironman, but doing some form of exercise has a number of benefits that will keep you healthy, fit, and focused.
If you go to work by bike or on foot, then you're already ahead of the curve. Exercising regularly has a wide range of benefits that will pay dividends in the long run. It keeps your body fit, improves your health, and makes you mentally more resilient. The latter is especially important if your job is stressful or mentally taxing.
Repetitive strain injury (RSI) is a common problem among programmers. Exercise can help prevent RSI or reduce the symptoms. There are many techniques you can use to prevent RSI and exercise is definitely the easiest and the cheapest.
2. Sleep
This tip is especially important if you're still in your teens or twenties. At that age, programmers tend to forget that they are mortal. I'm in my thirties now, so I know what it feels like to think that nothing can stop you in your quest to conquer the world. While pulling an all-nighter won't kill you, it can wreck your health if you're not careful. The older you get, the longer it takes to recuperate from sleepless nights. Burning the midnight oil may be necessary on occasion, but respect your health by not making it a habit.
Sleep is something your body needs to recover from the previous day and stay fit. The body uses this time of inactivity to reboot your brain and repair any damage incurred during the day. Studies have shown that getting enough sleep significantly boosts your focus during the day, improves your immune system, and even enhances your memory.
Research has also shown that you cannot catch up on sleep. If you only get four hours of sleep on Monday, then it's not going to be very helpful to sleep ten hours on Tuesday. Not getting enough sleep has no advantages. It may be necessary at times, but keep in mind that it will have consequences if you're not careful.
3. Posture
Bad posture is a common problem these days. Even if your office is equipped with comfortable, ergonomic chairs and desks, it's important to be conscious of your posture. It may seem unimportant when you're in your twenties, but the number of young people suffering from chronic injuries caused by bad posture is alarming—and climbing.
Studies have shown that sitting at a desk for long periods of time isn't healthy for a variety of reasons. Some programmers solve this by using a standing desk. If you haven't heard of a standing desk, it's exactly what it says on the tin. Standing desks don't even need to be expensive.
If a standing desk is something you'd like to try out, then take into account that your body will need some time to adjust to your new setup. It's recommended to gradually increase the time you spend working at a standing desk. Did I mention that you also burn calories simply by standing up while you work?
Some people take it one step further and use a walking or treadmill desk. The main advantage over a standing desk is the number of calories you burn during the day. Note that the speed of the treadmill is very low, about two miles per hour. This is important for safety and to avoid injuries. The idea of a treadmill desk isn't running a marathon while you work. The goal is to keep your metabolic rate slightly above the basal metabolic rate.
4. Caffeine
Like many people, I love, love, love coffee. I know very few programmers who don't use—or misuse—caffeine as a form of fuel during the day. Some choose coffee or tea while others swear by energy drinks. While there's nothing wrong with caffeine—it even has some health benefits—it can seriously mess with your metabolism and biorhythm if misused.
A common misconception is that caffeine is fuel for your body. Even though most energy drinks are high in calories, caffeine itself doesn't contain any calories. Caffeine is nothing more than a molecule that interacts directly with your central nervous system.
Over the years, researchers have discovered a number of interesting findings about caffeine and how it affects people's performance. For example, people who consume caffeine on a regular basis need it to function properly. This is something most people know, but underestimate. If you have been using caffeine for a few weeks or months, then you need a dose of caffeine simply to stay on par with someone who doesn't depend on it.
If the coffee machine at work is broken, then you know you have a rough day ahead of you, depending on how much caffeine your body is used to. Removing caffeine from your diet isn't easy for most people, especially if you work in a stressful environment. I've done it several times and I can assure you that the first few days can be very, very rough.
Caffeine is addictive and it's all too easy to relapse into drinking coffee when you need that bit of extra energy. But why would you want to cut out caffeine? The most important benefit is that you will sleep much, much better. Another major benefit is that you are no longer dependent on caffeine to function. Do you need extra energy? Go for a run. You'll be surprised by the amount of energy you get from something as simple as a five mile run.
5. Balance
Finding balance in your life isn't always easy. If you have a demanding job with a bunch of responsibilities, then it may be difficult to go offline for long periods of time. No matter what you do for a living, it's important to find a balance between work and life.
Everyone needs some time off to decompress and relax. Making a clear distinction between work and life can really help recharge your batteries. You'll be able to spend time with your family without being distracted by work. Put aside your phone and don't check email when spending time with friends and family. It truly helps you recover from the day and prepare for the next.
6. Diet
It goes without saying that a healthy diet is the cornerstone of a healthy lifestyle. That doesn't mean that you can't have a snack or that you need to cut out everything from your diet that's unhealthy. It simply means being conscious of what you eat and when you eat it. That simple act can make a world of difference.
As I mentioned earlier, black coffee doesn't contain any calories, so it's not a good idea to drink only coffee during the day. Make sure to take the time to eat in the morning and to take a short break during lunch.
Do you also enjoy a good night's sleep? Then it's important to not overeat in the evening and not eat too late. If your body is still processing your dinner when you go to bed, then it won't have much time to rest. A healthy lifestyle isn't rocket science.
7. Disconnect
The power and possibilities of modern smartphones and smartwatches are amazing. They allow us to be connected with friends and family wherever we go. It's nice to see that someone favorited your tweet or friended you on Facebook. Right?
I strongly believe that there are times that you need to disconnect, put your computer aside, turn off your phone, and disconnect from the internet. It's a wonderful feeling to leave your smart devices at home and go for a walk in the woods or in the park. Your mind will thank you for it.
Enjoy your surroundings and, more importantly, the feeling that nobody will call or text you. Some of us are so hooked on being connected with the world that we feel anxious if our smartphone isn't within arm's reach. If that's you, then this may be an early warning bell that it's time to step back, even if it's only for ten or fifteen minutes.
Conclusion
When I was young, the internet wasn't a thing yet, smartphones still had to be invented, and smartwatches were science fiction. We are inundated with information from the moment we wake up until we go to bed. It doesn't take a genius to figure out that we need to take a break from time to time. Combine this with exercise and a healthy diet, and you have the ingredients for a successful career as a programmer. What tips do you have for a healthy lifestyle?
Facebook’s React Native is a powerful framework that allows you to quickly and effortlessly build Android and iOS apps using just JavaScript and JSX. Apps built using React Native make use of native user interface components and are thus indistinguishable from apps built directly using the SDKs of Android and iOS.
Their performance, too, is not far behind that of native apps, because almost all the JavaScript code runs in the background on an embedded instance of JavaScriptCore, the same JavaScript engine that powers Apple’s Safari.
In this tutorial, I am going to help you get started with React Native for Android by showing you how to build a simple English-German dictionary app.
Prerequisites
Before you begin, make sure you have the following installed on your computer:
the latest version of the Android SDK and Android Support Library
As of September 2015, React Native is only supported on OS X. However, with the help of a few scripts, React Native v0.11.4 works just fine on Ubuntu 14.04.
1. Installing React Native
React Native is available as a Node.js package and can be quickly installed using npm, Node Package Manager.
npm install -g react-native-cli
To use React Native for developing Android apps, you should set the value of an environment variable called ANDROID_HOME to the absolute path of the directory containing the Android SDK. If you are using Bash shell, you can set the variable using export.
export ANDROID_HOME=/path/to/Android/Sdk
2. Creating a New Project
To create a React Native project, you should use React Native’s command line interface or CLI, which can be accessed using the react-native command. We are creating a dictionary app in this tutorial so let’s call the project Dictionary.
react-native init Dictionary
Once the command completes, you will have a new directory called Dictionary, containing a starter React Native app. Enter the new directory using cd.
cd Dictionary
Before you proceed, I suggest you run the starter app to make sure that your development environment has everything React Native needs. To do so, type in the following command:
react-native run-android
You will now find an app called Dictionary installed on your emulator. Click on its icon to start it. If everything went well, you should see a screen that looks like this:
3. Preparing the Entry Point of Your App
By default, the entry point of a React Native Android app is a JavaScript file called index.android.js. When you created the project using the React Native CLI, this file was created automatically. However, it contains code that belongs to the starter app. You can modify and use parts of that code for your app, or you can simply delete all of it and start from scratch. For this tutorial, I suggest you do the latter.
Once you have deleted the contents of index.android.js, use require to load a module called react-native. This module contains all the React Native functions and objects you’ll need to create your app.
var React = require('react-native');
4. Creating a React Component
React components are JavaScript objects that are responsible for rendering and automatically updating the user interface of a React Native app. In fact, almost every user interface element of a React Native app is a React component. This means that, to create the user interface of your app, you need to create your own custom React component. To do so, use the createClass function of React. The following code creates a component called Dictionary:
var Dictionary = React.createClass({
});
You can think of this component as the first screen of your app.
Step 1: Defining the Layout
React Native automatically calls the render function every time it needs to draw or update a component. Therefore, you must add this function to your component. Inside the function, you can define the layout of the component using JSX, a JavaScript syntax extension that allows you to easily mix XML tags with JavaScript code.
React Native offers several components you can use to compose the layout. For now, we will be using a React.View as a container, a React.Text to display text, and a React.TextInput to accept user input. Add the following code to the component:
render: function() {
var layout =
<React.View style = { styles.parent } >
<React.Text>
Type something in English:
</React.Text>
<React.TextInput />
<React.Text style = { styles.germanLabel } >
Its German equivalent is:
</React.Text>
<React.Text style = { styles.germanWord } >
</React.Text>
</React.View>
;
return layout;
},
If you are familiar with HTML, you can think of the View as an HTML div, the Text as an HTML span, and the TextInput as an HTML input element.
Step 2: Adding Styles
In the above code snippet, several components have a style attribute. The style attribute is quite similar to the HTML class attribute. However, instead of referring to a CSS class in a stylesheet, it refers to a JSON object in an instance of React.StyleSheet.
To create a React.StyleSheet object for your app, you need to use the React.StyleSheet.create function. As its only argument, it expects a JSON object containing the styles of the individual components. Here are the styles I used for our example app:
var styles = React.StyleSheet.create({
// For the container View
parent: {
padding: 16
},
// For the Text label
germanLabel: {
marginTop: 20,
fontWeight: 'bold'
},
// For the Text meaning
germanWord: {
marginTop: 15,
fontSize: 30,
fontStyle: 'italic'
}
});
Step 3: Registering the Component
To let React Native know that it should render your component when your app starts, you must register it using the React.AppRegistry.registerComponent function. To do so, add the following code at the end of index.android.js:
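The registration call itself is missing from the text above. A sketch of the line to add, assuming the component and the app are both named Dictionary (the stubs at the top only make this snippet self-contained; in the real file, React comes from require('react-native') and Dictionary is the component defined earlier):

```javascript
// Stub standing in for the component defined earlier in index.android.js.
var Dictionary = function() { return null; };

// Stub for the real AppRegistry, which keeps a registry of root
// components keyed by app name.
var registry = {};
var React = {
  AppRegistry: {
    registerComponent: function(name, getComponent) {
      registry[name] = getComponent();
    }
  }
};

// The line to add at the end of index.android.js: register Dictionary
// as the root component. The name must match the app name that was
// passed to `react-native init`.
React.AppRegistry.registerComponent('Dictionary', function() {
  return Dictionary;
});
```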
If you want to, you can now reload your app to see the new layout. To do so, press the menu button of your emulator and click on Reload JS.
5. Controlling the State of the Component
All components have a special member variable called state, which is a JSON object. It’s special, because as soon as the state of a component changes, React Native automatically re-renders the component to reflect the change. This is a very useful feature and by using it correctly you can do away with manually fetching or updating the contents of your app’s user interface elements.
Let’s add two keys, input and output, to the Dictionary component’s state. To do so, you’ll have to use a function called getInitialState. The return value of this function becomes the state of the component.
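A minimal sketch of what that could look like, assuming input and output both start out as empty strings. In the app this function goes inside the React.createClass call; it's wrapped in a plain object here only so the snippet stands on its own:

```javascript
var Dictionary = {
  // React Native calls this once and assigns the returned object to
  // this.state before the component is rendered for the first time.
  getInitialState: function() {
    return {
      input: '',  // the English word the user types
      output: ''  // its German equivalent, once looked up
    };
  }
};
```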
You can now associate the TextInput with input and the last Text component with output. After doing so, your layout should look like this:
<React.View style = { styles.parent } >
<React.Text>
Type something in English:
</React.Text>
<React.TextInput text = { this.state.input } />
<React.Text style = { styles.germanLabel } >
Its German equivalent is:
</React.Text>
<React.Text style = { styles.germanWord } >
{ this.state.output }
</React.Text>
</React.View>
As you might have guessed, input will contain the English word the user enters while output will contain its German equivalent.
Though changes in the state are automatically pushed to the user interface, the reverse is not true. This means that our component’s state does not change when the user enters something into the TextInput. To update the state manually, you should use the component’s setState method.
To send the value of the TextInput to input, you can add an onChangeText listener to the TextInput and make a call to setState inside it. Using ES6, the TextInput tag will look like this:
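The tag itself is missing from the text. Assuming the state key is input, it could look something like this, with an ES6 arrow function as the listener:

```jsx
<React.TextInput
    text = { this.state.input }
    onChangeText = { (text) => this.setState({ input: text }) } />
```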
At this point, anything the user types into your app’s TextInput is immediately available in input. All that’s left for us to do is map the input to its German equivalent and update output. To do that, you can use a dictionary called Mr. Honey’s Beginner’s Dictionary (German-English) by Winfried Honig. Download the JSON equivalent of the dictionary from GitHub and add it to your project.
To load the dictionary inside index.android.js, use require.
var english_german = require('./english_german.json');
As english_german is nothing more than a global JSON object where the English words are keys and their German equivalents are values, all you have to do now is check if input is available as a key, and, if yes, call setState to assign the associated value to output. The code to do so could look like this:
showMeaning: function() {
// Use the ternary operator to check if the word
// exists in the dictionary.
var meaning = this.state.input in english_german ?
english_german[this.state.input] :
"Not Found";
// Update the state
this.setState({
output: meaning
});
},
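Since the lookup in showMeaning is plain JavaScript, it's easy to see it in isolation. Here's a self-contained sketch of the same ternary pattern, using a two-entry stand-in for the real english_german.json:

```javascript
// A tiny stand-in for the JSON dictionary loaded with require().
var english_german = { hello: 'hallo', world: 'Welt' };

// The same ternary lookup pattern used in showMeaning.
function lookup(word) {
  return word in english_german ? english_german[word] : 'Not Found';
}
```

With this stand-in, lookup('hello') evaluates to 'hallo', while any word that isn't a key evaluates to 'Not Found'.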
You can now assign showMeaning to the onSubmitEditing listener of the TextInput so that it is called only when the user has finished typing.
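Putting the two listeners together, a sketch of what the final tag could look like:

```jsx
<React.TextInput
    text = { this.state.input }
    onChangeText = { (text) => this.setState({ input: text }) }
    onSubmitEditing = { this.showMeaning } />
```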
Your dictionary app is ready. You can reload it and type in an English word to immediately see its German translation.
Conclusion
In this tutorial, you learned how to install React Native and use it to create your first Android app, an English-German dictionary, using just JavaScript and JSX. While doing so, you learned how to compose a custom component, style it, and use its state to control what it shows.
To learn more about React Native, you can go through its documentation.
I have yet to meet a programmer who enjoys error handling. Whether you like it or not, a robust application needs to handle errors in such a way that the application remains functional and informs the user when necessary. Like testing, it's part of the job.
1. Objective-C
In Objective-C, it was all too easy to ignore error handling. Take a look at the following example in which I ignore any errors that may result from executing a fetch request.
// Execute Fetch Request
NSArray *results = [managedObjectContext executeFetchRequest:fetchRequest error:nil];
if (results) {
// Process Results
...
}
The above example shows that error handling in Objective-C is something the developer needs to opt into. If you'd like to know what went wrong if something goes haywire, then you tell this to the API by handing it an NSError pointer. The example below illustrates how this works in Objective-C.
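That second example is missing here. It passes the address of an NSError pointer instead of nil; a sketch, reusing the fetch request from the first snippet:

```objc
// Declare an NSError pointer and hand its address to the API.
NSError *error = nil;

// Execute Fetch Request
NSArray *results = [managedObjectContext executeFetchRequest:fetchRequest error:&error];

if (results) {
    // Process Results
    ...
} else {
    // Handle Error
    NSLog(@"Unable to execute fetch request, %@", error);
}
```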
While earlier versions of Swift didn't come with a good solution for error handling, Swift 2 has given us what we've asked for and it was well worth the wait. In Swift 2, error handling is enabled by default. In contrast to Objective-C, developers need to explicitly tell the compiler if they choose to ignore error handling. While this won't force developers to embrace error handling, it makes the decision explicit.
If we were to translate the above example to Swift, we would end up with the same number of lines. While the amount of code you need to write remains unchanged, the syntax makes it very explicit what you are trying to do.
do {
// Execute Fetch Request
let results = try managedObjectContext.executeFetchRequest(fetchRequest)
// Process Results
...
} catch {
let fetchError = error as NSError
// Handle Error
...
}
At the end of this tutorial, you will understand the above code snippet and know everything you need to know to handle errors in Swift 2.
2. Throwing Functions
throws
The foundation of error handling in Swift is the ability of functions and methods to throw errors. In Swift parlance, a function that can throw errors is referred to as a throwing function. The definition of a throwing function makes this ability very clear, as illustrated in the following example.
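Take NSData's init(contentsOfURL:options:) initializer, which we invoke a bit further down. In Swift 2, its declaration ends with the throws keyword:

```swift
init(contentsOfURL url: NSURL, options readOptionsMask: NSDataReadingOptions) throws
```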
The throws keyword indicates that init(contentsOfURL:options:) can throw an error if something goes wrong. If you invoke a throwing function like any other function, as in the example below, the compiler will throw an error of its own, speaking of irony. Why is that?
let data = NSData(contentsOfURL: URL, options: [])
try
The creators of Swift have put a lot of attention into making the language expressive, and error handling is exactly that, expressive. If you invoke a function that can throw an error, the function call needs to be preceded by the try keyword. The try keyword isn't magical. All it does is make the developer aware of the throwing ability of the function.
Wait a second. The compiler continues to complain even though we've preceded the function call with the try keyword. What are we missing?
The compiler sees that we're using the try keyword, but it correctly points out that we have no way in place to catch any errors that may be thrown. To catch errors, we use Swift's brand new do-catch statement.
do-catch
If a throwing function throws an error, the error will automatically propagate out of the current scope until it is caught. This is similar to exceptions in Objective-C and other languages. The idea is that an error must be caught and handled at some point. More specifically, an error propagates until it is caught by a catch clause of a do-catch statement.
In the updated example below, we invoke the init(contentsOfURL:options:) methods in a do-catch statement. In the do clause, we invoke the function, using the try keyword. In the catch clause, we handle any errors that were thrown while executing the function. This is a pattern that's very common in Swift 2.
do {
let data = try NSData(contentsOfURL: URL, options: [])
} catch {
print("\(error)")
}
In the catch clause, you have access to the error that was thrown through a local constant error. The catch clause is much more powerful than what is shown in the above example. We'll take a look at a more interesting example a bit later.
3. Throwing Errors
In Objective-C, you typically use NSError, defined in the Foundation framework, for error handling. Because the language doesn't define how error handling should be implemented, you are free to define your own class or structure for creating errors.
This isn't true in Swift. While any class or structure can act as an error, they need to conform to the ErrorType protocol. The protocol, however, is pretty easy to implement since it doesn't declare any methods or properties.
Enumerations are powerful in Swift and they are a good fit for error handling. Enums are great for the pattern matching functionality of the catch clause of the do-catch statement. It's easier to illustrate this with an example. Let's start by defining an enum that conforms to the ErrorType protocol.
enum PrinterError: ErrorType {
case NoToner
case NoPaper
case NotResponding
case MaintenanceRequired
}
We define an enum, PrinterError, that conforms to the ErrorType protocol. The enum has four member variables. We can now define a function for printing a document. We pass the function an NSData instance and tell the compiler that it can throw errors by using the throws keyword.
To print a document, we invoke printDocumentWithData(_:). As we saw earlier, we need to use the try keyword and wrap the function call in a do-catch statement. In the example below, we handle any errors in the catch clause.
We can improve the example by inspecting the error that is thrown. A catch clause is similar to a switch statement in that it allows for pattern matching. Take a look at the updated example below.
do {
try printDocumentWithData(data)
} catch PrinterError.NoToner {
// Notify User
} catch PrinterError.NoPaper {
// Notify User
} catch PrinterError.NotResponding {
// Schedule New Attempt
}
That looks much better. But there is one problem. The compiler is notifying us that we are not handling every possible error the printDocumentWithData(_:) method might throw.
The compiler is right of course. A catch clause is similar to a switch statement in that it needs to be exhaustive, it needs to handle every possible case. We can add another catch clause for PrinterError.MaintenanceRequired or we can add a catch-all clause at the end. By adding a default catch clause, the compiler error should disappear.
do {
try printDocumentWithData(data)
} catch PrinterError.NoToner {
// Notify User
} catch PrinterError.NoPaper {
// Notify User
} catch PrinterError.NotResponding {
// Schedule New Attempt
} catch {
// Handle Any Other Errors
}
4. Cleaning Up After Yourself
The more I learn about the Swift language, the more I come to appreciate it. The defer statement is another wonderful addition to the language. The name sums it up pretty nicely, but let me show you an example to explain the concept.
The example is a bit contrived, but it illustrates the use of defer. The block of the defer statement is executed before execution exits the scope in which the defer statement appears. You may want to read that sentence again.
It means that the powerOffPrinter() function is invoked even if the printData(_:) function throws an error. I'm sure you can see that it works really well with Swift's error handling.
The position of the defer statement within the if statement is not important. The following updated example is identical as far as the compiler is concerned.
You can have multiple defer statements as long as you remember that they are executed in reverse order in which they appear.
5. Propagation
It is possible that you don't want to handle an error, but instead let it bubble up to an object that is capable of or responsible for handling the error. That is fine. Not every try expression needs to be wrapped in a do-catch statement. There is one condition though, the function that calls the throwing function needs to be a throwing function itself. Take a look at the next two examples.
func printTestDocument() {
// Load Document Data
let dataForDocument = NSData(contentsOfFile: "pathtodocument")
if let data = dataForDocument {
try printDocumentWithData(data)
}
}
func printTestDocument() throws {
// Load Document Data
let dataForDocument = NSData(contentsOfFile: "pathtodocument")
if let data = dataForDocument {
try printDocumentWithData(data)
}
}
The first example results in a compiler error, because we don't handle the errors that printDocumentWithData(_:) may throw. We resolve this issue in the second example by marking the printTestDocument() function as throwing. If printDocumentWithData(_:) throws an error, then the error is passed to the caller of the printTestDocument() function.
6. Bypassing Error Handling
At the beginning of this article, I wrote that Swift wants you to embrace error handling by making it easy and intuitive. There may be times that you don't want or need to handle the errors that are thrown. You decide to stop the propagation of errors. That is possible by using a variant of the try keyword, try!.
In Swift, an exclamation mark always serves as a warning. An exclamation mark basically tells the developer that Swift is no longer responsible if something goes wrong. And that is what the try! keyword tells you. If you precede a throwing function call with the try! keyword, also known as a forced-try expression, error propagation is disabled.
While this may sound fantastic to some of you, I must warn you that this isn't what you think it is. If a throwing function throws an error and you've disabled error propagation, then you'll run into a runtime error. This mostly means that your application will crash. You have been warned.
7. Objective-C APIs
The Swift team at Apple has put a lot of effort into making error handling as transparent as possible for Objective-C APIs. For example, have you noticed that the first Swift example of this tutorial is an Objective-C API. Despite the API being written in Objective-C, the method doesn't accept an NSError pointer as its last argument. To the compiler, it's a regular throwing method. This is what the method definition looks like in Objective-C.
And this is what the method definition looks like in Swift.
public func executeFetchRequest(request: NSFetchRequest) throws -> [AnyObject]
The errors that executeFetchRequest(request: NSFetchRequest) throws are NSError instances. This is only possible, because NSError conforms to the ErrorType protocol as we discussed earlier. Take a look at the Conforms To column below.
Learn More in Our Swift 2 Programming Course
Swift 2 has a lot of new features and possibilities. Take our course on Swift 2 development to get you up to speed. Error handling is just a small piece of the possibilities of Swift 2.
Conclusion
The takeaway message of this article is that error handling rocks in Swift. If you've paid attention, then you've also picked up that you will need to adopt error handling if you choose to develop in Swift. Using the try! keyword won't get you out of error handling. It's the opposite, using it too often will get you into trouble. Give it a try and I'm sure you're going to love it once you've given it some time.
I have yet to meet a programmer who enjoys error handling. Whether you like it or not, a robust application needs to handle errors in such a way that the application remains functional and informs the user when necessary. Like testing, it's part of the job.
1. Objective-C
In Objective-C, it was all too easy to ignore error handling. Take a look at the following example in which I ignore any errors that may result from executing a fetch request.
// Execute Fetch Request
NSArray *results = [managedObjectContext executeFetchRequest:fetchRequest error:nil];
if (results) {
// Process Results
...
}
The above example shows that error handling in Objective-C is something the developer needs to opt into. If you'd like to know what went wrong if something goes haywire, then you tell this to the API by handing it an NSError pointer. The example below illustrates how this works in Objective-C.
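A sketch of the opt-in pattern, based on the snippet above, now passing an NSError pointer and inspecting it when the fetch request fails:

```objc
// Execute Fetch Request
NSError *error = nil;
NSArray *results = [managedObjectContext executeFetchRequest:fetchRequest error:&error];

if (results) {
    // Process Results
} else {
    // Handle Error
    NSLog(@"Unable to execute fetch request: %@, %@", error, error.userInfo);
}
```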
While earlier versions of Swift didn't come with a good solution for error handling, Swift 2 has given us what we've asked for and it was well worth the wait. In Swift 2, error handling is enabled by default. In contrast to Objective-C, developers need to explicitly tell the compiler if they choose to ignore error handling. While this won't force developers to embrace error handling, it makes the decision explicit.
If we were to translate the above example to Swift, we would end up with the same number of lines. While the amount of code you need to write remains unchanged, the syntax makes it very explicit what you are trying to do.
do {
// Execute Fetch Request
let results = try managedObjectContext.executeFetchRequest(fetchRequest)
// Process Results
...
} catch {
let fetchError = error as NSError
// Handle Error
...
}
At the end of this tutorial, you will understand the above code snippet and know everything you need to know to handle errors in Swift 2.
2. Throwing Functions
throws
The foundation of error handling in Swift is the ability for functions and methods to throw errors. In Swift parlance, a function that can throw errors is referred to as a throwing function. The definition of a throwing function makes this ability very clear, as illustrated in the following example.
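This is, roughly, how NSData's throwing initializer is declared in Swift 2; note the throws keyword at the end of the declaration:

```swift
public init(contentsOfURL url: NSURL, options readOptionsMask: NSDataReadingOptions) throws
```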
The throws keyword indicates that init(contentsOfURL:options:) can throw an error if something goes wrong. Ironically, if you invoke a throwing function like any other function, the compiler throws an error of its own. Why is that?
let data = NSData(contentsOfURL: URL, options: [])
try
The creators of Swift have paid a lot of attention to making the language expressive, and error handling is exactly that, expressive. If you want to invoke a function that can throw an error, the function call needs to be preceded by the try keyword. The try keyword isn't magical. All it does is make the developer aware of the throwing ability of the function.
Wait a second. The compiler continues to complain even though we've preceded the function call with the try keyword. What are we missing?
The compiler sees that we're using the try keyword, but it correctly points out that we have no mechanism in place to catch any errors that may be thrown. To catch errors, we use Swift's brand new do-catch statement.
do-catch
If a throwing function throws an error, the error will automatically propagate out of the current scope until it is caught. This is similar to exceptions in Objective-C and other languages. The idea is that an error must be caught and handled at some point. More specifically, an error propagates until it is caught by a catch clause of a do-catch statement.
In the updated example below, we invoke the init(contentsOfURL:options:) initializer in a do-catch statement. In the do clause, we invoke the initializer, preceded by the try keyword. In the catch clause, we handle any errors that were thrown while executing it. This is a pattern that's very common in Swift 2.
do {
let data = try NSData(contentsOfURL: URL, options: [])
} catch {
print("\(error)")
}
In the catch clause, you have access to the error that was thrown through a local constant error. The catch clause is much more powerful than what is shown in the above example. We'll take a look at a more interesting example a bit later.
3. Throwing Errors
In Objective-C, you typically use NSError, defined in the Foundation framework, for error handling. Because the language doesn't define how error handling should be implemented, you are free to define your own class or structure for creating errors.
That isn't true in Swift. Any class, structure, or enumeration can act as an error, as long as it conforms to the ErrorType protocol. The protocol is easy to adopt, though, since it doesn't declare any methods or properties.
Enumerations are powerful in Swift and they are a good fit for error handling. Enums are great for the pattern matching functionality of the catch clause of the do-catch statement. It's easier to illustrate this with an example. Let's start by defining an enum that conforms to the ErrorType protocol.
enum PrinterError: ErrorType {
case NoToner
case NoPaper
case NotResponding
case MaintenanceRequired
}
We define an enum, PrinterError, that conforms to the ErrorType protocol. The enum has four cases. We can now define a function for printing a document. We pass the function an NSData instance and tell the compiler that it can throw errors by using the throws keyword.
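A minimal sketch of such a function; the readiness checks and the printerHasToner and printerHasPaper flags are hypothetical:

```swift
func printDocumentWithData(data: NSData) throws {
    // Hypothetical readiness checks that throw the errors we defined.
    guard printerHasToner else { throw PrinterError.NoToner }
    guard printerHasPaper else { throw PrinterError.NoPaper }

    // Send Data to Printer
    // ...
}
```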
To print a document, we invoke printDocumentWithData(_:). As we saw earlier, we need to use the try keyword and wrap the function call in a do-catch statement. In the example below, we handle any errors in the catch clause.
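The call could then look like this:

```swift
do {
    try printDocumentWithData(data)
} catch {
    print("\(error)")
}
```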
We can improve the example by inspecting the error that is thrown. A catch clause is similar to a switch statement in that it allows for pattern matching. Take a look at the updated example below.
do {
try printDocumentWithData(data)
} catch PrinterError.NoToner {
// Notify User
} catch PrinterError.NoPaper {
// Notify User
} catch PrinterError.NotResponding {
// Schedule New Attempt
}
That looks much better. But there is one problem. The compiler is notifying us that we are not handling every possible error the printDocumentWithData(_:) method might throw.
The compiler is right, of course. A catch clause is similar to a switch statement in that it needs to be exhaustive; it needs to handle every possible case. We can either add another catch clause for PrinterError.MaintenanceRequired or add a catch-all clause at the end. With a catch-all clause in place, the compiler error disappears.
do {
try printDocumentWithData(data)
} catch PrinterError.NoToner {
// Notify User
} catch PrinterError.NoPaper {
// Notify User
} catch PrinterError.NotResponding {
// Schedule New Attempt
} catch {
// Handle Any Other Errors
}
4. Cleaning Up After Yourself
The more I learn about the Swift language, the more I come to appreciate it. The defer statement is another wonderful addition to the language. The name sums it up pretty nicely, but let me show you an example to explain the concept.
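A sketch of the idea, assuming hypothetical powerOnPrinter(), powerOffPrinter(), and printerIsReady helpers:

```swift
func printData(data: NSData) throws {
    powerOnPrinter()

    if printerIsReady {
        // Executed before control leaves this scope, even if an error is thrown below.
        defer {
            powerOffPrinter()
        }

        try printDocumentWithData(data)
    }
}
```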
The example is a bit contrived, but it illustrates the use of defer. The block of the defer statement is executed before execution exits the scope in which the defer statement appears. You may want to read that sentence again.
It means that the powerOffPrinter() function is invoked even if the printData(_:) function throws an error. I'm sure you can see that it works really well with Swift's error handling.
The position of the defer statement within the if statement is not important. The following updated example is identical as far as the compiler is concerned.
You can have multiple defer statements, as long as you remember that they are executed in the reverse order in which they appear.
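A quick sketch to illustrate the ordering; the second defer statement runs first:

```swift
func cleanUp() {
    defer { print("first deferred") }
    defer { print("second deferred") }
    print("function body")
}

// Prints "function body", then "second deferred", then "first deferred".
```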
5. Propagation
It is possible that you don't want to handle an error, but instead let it bubble up to an object that is capable of, or responsible for, handling the error. That is fine. Not every try expression needs to be wrapped in a do-catch statement. There is one condition, though: the function that calls the throwing function needs to be a throwing function itself. Take a look at the next two examples.
func printTestDocument() {
// Load Document Data
let dataForDocument = NSData(contentsOfFile: "pathtodocument")
if let data = dataForDocument {
try printDocumentWithData(data)
}
}
func printTestDocument() throws {
// Load Document Data
let dataForDocument = NSData(contentsOfFile: "pathtodocument")
if let data = dataForDocument {
try printDocumentWithData(data)
}
}
The first example results in a compiler error, because we don't handle the errors that printDocumentWithData(_:) may throw. We resolve this issue in the second example by marking the printTestDocument() function as throwing. If printDocumentWithData(_:) throws an error, then the error is passed to the caller of the printTestDocument() function.
6. Bypassing Error Handling
At the beginning of this article, I wrote that Swift wants you to embrace error handling by making it easy and intuitive. There may be times, however, that you don't want or need to handle the errors that are thrown. In those situations, you can stop the propagation of errors by using a variant of the try keyword, try!.
In Swift, an exclamation mark always serves as a warning. An exclamation mark basically tells the developer that Swift is no longer responsible if something goes wrong. And that is what the try! keyword tells you. If you precede a throwing function call with the try! keyword, also known as a forced-try expression, error propagation is disabled.
While this may sound fantastic to some of you, I must warn you that it isn't what you may think it is. If a throwing function throws an error and you've disabled error propagation, you'll run into a runtime error. In most cases, this means that your application will crash. You have been warned.
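For example, the following forced-try expression crashes at runtime if the data can't be loaded:

```swift
let data = try! NSData(contentsOfURL: URL, options: [])
```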
7. Objective-C APIs
The Swift team at Apple has put a lot of effort into making error handling as transparent as possible for Objective-C APIs. For example, have you noticed that the first Swift example of this tutorial invokes an Objective-C API? Despite the API being written in Objective-C, the method doesn't accept an NSError pointer as its last argument. To the compiler, it's a regular throwing method. This is what the method definition looks like in Objective-C.
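From the NSManagedObjectContext class, with the familiar NSError pointer as the last parameter:

```objc
- (nullable NSArray *)executeFetchRequest:(NSFetchRequest *)request error:(NSError **)error;
```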
And this is what the method definition looks like in Swift.
public func executeFetchRequest(request: NSFetchRequest) throws -> [AnyObject]
The errors that executeFetchRequest(_:) throws are NSError instances. This is only possible because NSError conforms to the ErrorType protocol, as we discussed earlier. Take a look at the Conforms To column below.
Learn More in Our Swift 2 Programming Course
Swift 2 has a lot of new features and possibilities. Take our course on Swift 2 development to get you up to speed. Error handling is just a small piece of the possibilities of Swift 2.
Conclusion
The takeaway message of this article is that error handling rocks in Swift. If you've paid attention, then you've also picked up that you will need to adopt error handling if you choose to develop in Swift. Using the try! keyword won't get you out of error handling. Quite the opposite: using it too often will get you into trouble. Give it a try, and I'm sure you're going to love it once you've given it some time.
With the release of Swift 2, Apple added a range of new features and capabilities to the Swift programming language. One of the most important, however, was an overhaul of protocols. The improved functionality available with Swift protocols allows for a new type of programming, protocol-oriented programming. This is in contrast to the more common object-oriented programming style many of us are used to.
In this tutorial, I am going to show you the basics of protocol-oriented programming in Swift and how it differs from object-oriented programming.
Prerequisites
This tutorial requires that you are running Xcode 7 or higher, which includes support for version 2 of the Swift programming language.
1. Protocol Basics
If you aren't already familiar with protocols, they are a way of extending the functionality of an existing class or structure. A protocol can be thought of as a blueprint or interface that defines a set of properties and methods. A class or structure that conforms to a protocol is required to fill out these properties and methods with values and implementations respectively.
It should also be noted that any of these properties and methods can be designated as optional, which means that conforming types aren't required to implement them. Keep in mind that optional requirements are only available in protocols marked with the @objc attribute, and that classes conforming to such a protocol need to inherit from NSObject. A protocol definition and class conformance in Swift could look like this:
@objc protocol Welcome {
var welcomeMessage: String { get set }
optional func welcome()
}
class Welcomer: NSObject, Welcome {
var welcomeMessage = "Hello World!"
func welcome() {
print(welcomeMessage)
}
}
2. An Example
To begin, open Xcode and create a new playground for either iOS or OS X. Once Xcode has created the playground, replace its contents with the following:
protocol Drivable {
var topSpeed: Int { get }
}
protocol Reversible {
var reverseSpeed: Int { get }
}
protocol Transport {
var seatCount: Int { get }
}
We define three protocols, each containing a property. Next, we create a structure that conforms to these three protocols. Add the following code to the playground:
struct Car: Drivable, Reversible, Transport {
var topSpeed = 150
var reverseSpeed = 20
var seatCount = 5
}
You may have noticed that instead of creating a class that conforms to these protocols, we created a structure. We do this to avoid one of the typical problems inherent in object-oriented programming: shared object references.
Imagine, for example, that you have two objects, A and B. A creates some data on its own and keeps a reference to that data. A then shares this data with B by reference, which means that both objects have a reference to the same object. Without A knowing, B changes the data in some way.
While this may not seem like a big problem, it can be when A doesn't expect the data to be altered. Object A may find data it doesn't know how to handle. This is a common risk of object references.
In Swift, structures are passed by value rather than by reference. This means that, in the above example, if the data created by A was packaged as a structure instead of an object and shared with B, the data would be copied instead of shared by reference. This would then result in both A and B having their own unique copy of the same piece of data. A change made by B wouldn't affect the copy managed by A.
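A quick sketch of the difference, using hypothetical SpeedRecord types; the structure is copied, while the class instance is shared:

```swift
struct SpeedRecord { var topSpeed = 150 }
class SpeedRecordObject { var topSpeed = 150 }

var a = SpeedRecord()
var b = a               // b is an independent copy
b.topSpeed = 250
print(a.topSpeed)       // 150, a is unaffected

let objectA = SpeedRecordObject()
let objectB = objectA   // objectB references the same instance
objectB.topSpeed = 250
print(objectA.topSpeed) // 250, the shared instance was mutated
```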
Breaking up the Drivable, Reversible, and Transport components into individual protocols also allows for a greater level of customization than traditional class inheritance. If you've read my earlier tutorial about the new GameplayKit framework in iOS 9, you'll notice that this protocol-oriented model is very similar to the Entities and Components structure used in the GameplayKit framework.
By adopting this approach, custom data types can inherit functionality from multiple sources rather than a single superclass. Keeping in mind what we've got so far, we could create the following classes:
a class with components of the Drivable and Reversible protocols
a class with components of the Drivable and Transport protocols
a class with components of the Reversible and Transport protocols
With object-oriented programming, the most logical way to create these three classes would be to inherit from one superclass that contains the components of all three protocols. This approach, however, results in the superclass being more complicated than it needs to be and each of the subclasses inheriting more functionality than it needs.
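With protocols, on the other hand, each type adopts exactly the components it needs. A hypothetical example:

```swift
// Drives and reverses, but carries no passengers.
struct Forklift: Drivable, Reversible {
    var topSpeed = 30
    var reverseSpeed = 30
}

// Carries passengers and drives forward, but cannot reverse.
struct Rollercoaster: Drivable, Transport {
    var topSpeed = 100
    var seatCount = 24
}
```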
3. Protocol Extensions
Everything I've shown you so far has been possible in Swift since its release in 2014. These same protocol-oriented concepts could have even been applied to Objective-C protocols. Due to the limitations that used to exist on protocols, however, true protocol-oriented programming wasn't possible until a number of key features were added to the Swift language in version 2. One of the most important of these features is protocol extensions, including conditional extensions.
Firstly, let's extend the Drivable protocol and add a function to determine whether or not a particular Drivable is faster than another. Add the following to your playground:
extension Drivable {
func isFasterThan(item: Drivable) -> Bool {
return self.topSpeed > item.topSpeed
}
}
let sedan = Car()
let sportsCar = Car(topSpeed: 250, reverseSpeed: 25, seatCount: 2)
sedan.isFasterThan(sportsCar)
You can see that, when the playground's code is executed, it outputs a value of false, as your sedan has a default topSpeed of 150, which is less than the sportsCar's 250.
You may have noticed that we provided a function definition rather than a function declaration. This seems strange, because protocols are only supposed to contain declarations. Right? This is another very important feature of protocol extensions in Swift 2, default behaviors. By extending a protocol, you can provide a default implementation for functions and computed properties so that classes conforming to the protocol don't have to.
Next, we are going to define another Drivable protocol extension, but this time we'll only define it for value types that also conform to the Reversible protocol. This extension will then contain a function that determines which object has the better speed range. We can achieve this with the following code:
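A sketch of such a conditional extension; the hasLargerRangeThan(_:) name is an assumption, chosen to match the comparison described above:

```swift
extension Drivable where Self: Reversible {
    func hasLargerRangeThan(item: Self) -> Bool {
        return (self.topSpeed + self.reverseSpeed) > (item.topSpeed + item.reverseSpeed)
    }
}

sportsCar.hasLargerRangeThan(sedan) // true
```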
The Self keyword, spelled with a capital "S", is used to represent the class or structure that conforms to the protocol. In the above example, the Self keyword represents the Car structure.
After running the playground's code, Xcode will output the results in the sidebar on the right as shown below. Note that sportsCar has a larger range than sedan.
4. Working With the Swift Standard Library
While defining and extending your own protocols can be very useful, the true power of protocol extensions shows when working with the Swift standard library. This allows you to add properties or functions to existing protocols, such as CollectionType (used for things like arrays and dictionaries) and Equatable (being able to determine when two objects are equal or not). With conditional protocol extensions, you can also provide very specific functionality for a specific type of object that conforms to a protocol.
In our playground, we are going to extend the CollectionType protocol and create two methods, one to get the average top speed of cars in a Car array and another for the average reverse speed. Add the following code to your playground:
extension CollectionType where Self.Generator.Element: Drivable {
func averageTopSpeed() -> Int {
var total = 0, count = 0
for item in self {
total += item.topSpeed
count++
}
return (total/count)
}
}
func averageReverseSpeed<T: CollectionType where T.Generator.Element: Reversible>(items: T) -> Int {
var total = 0, count = 0
for item in items {
total += item.reverseSpeed
count++
}
return (total/count)
}
let cars = [Car(), sedan, sportsCar]
cars.averageTopSpeed()
averageReverseSpeed(cars)
The protocol extension that defines the averageTopSpeed method takes advantage of conditional extensions in Swift 2. In contrast, the averageReverseSpeed function we define directly below it is another way to achieve a similar result utilizing Swift generics. I personally prefer the cleaner looking CollectionType protocol extension, but it comes down to personal preference.
In both functions, we iterate through the array, add up the total amount, and then return the average value. Note that we manually keep a count of the items in the array, because when working with CollectionType rather than regular Array type items, the count property is a Self.Index.Distance type value rather than an Int.
Once your playground has executed all of this code, you should see an output average top speed of 183 and an average reverse speed of 21.
5. Importance of Classes
Despite protocol-oriented programming being a very efficient and scalable way to manage your code in Swift, there are still perfectly valid reasons for using classes when developing in Swift:
Backwards Compatibility
The majority of the iOS, watchOS, and tvOS SDKs are written in Objective-C, using an object-oriented approach. If you need to interact with any of the APIs included in these SDKs, you are forced to use the classes defined in these SDKs.
Referencing an External File or Item
The Swift compiler optimizes the lifetime of objects based on when and where they are used. The stability of class-based objects means that your references to other files and items will remain consistent.
Object References
Object references are exactly what you need at times, for example, if you're feeding information into a particular object, such as a graphics renderer. Using classes with implicit sharing is important in situations like this, because you need to be sure that the renderer you are sending the data to is still the same renderer as before.
Conclusion
Hopefully by the end of this tutorial you can see the potential of protocol-oriented programming in Swift and how it can be used to streamline and extend your code. While this new methodology of coding will not entirely replace object-oriented programming, it does bring a number of very useful, new possibilities.
From default behaviors to protocol extensions, protocol-oriented programming in Swift is going to be adopted by many future APIs and will completely change the way in which we think about software development.
As always, be sure to leave your comments and feedback in the comments below.
With the iPhone 6s and 6s Plus, Apple introduced an entirely new way of interacting with our devices called 3D Touch. 3D Touch works by detecting the amount of pressure that you are applying to your phone's screen in order to perform different actions. In this tutorial, I am going to show you how to take advantage of 3D Touch so that you can utilize this new technology in your own iOS 9 apps.
Prerequisites
This tutorial requires that you are running Xcode 7.1 or later. At the time of writing, the iOS Simulator doesn't support 3D Touch yet, which means that any testing needs to be done on a physical device, an iPhone 6s or iPhone 6s Plus. If you'd like to follow along, start by downloading the starter project from GitHub.
1. Peek and Pop in Storyboards
In this first section, I am going to show you how to implement Peek and Pop functionality in your app using storyboards—and a bit of code. If you don't know what Peek and Pop is, it's basically a way of pressing on a user interface element with a bit more force to get a "Peek" at it.
From such a preview, you can then either lift your finger to dismiss it or push a bit harder again to "Pop" it into full screen. "Peekable" items can be any view controller, including things such as emails, messages, and web pages as shown in the screenshot below.
Open the starter project in Xcode and navigate to Main.storyboard. Zoom the storyboard out by pinching on your trackpad or by pressing Command + -. Select the segue shown in the next screenshot.
With this segue selected, open the Attributes Inspector and look for a new section named Peek and Pop. Enable the checkbox and configure the behavior as shown below.
In this menu, you can assign custom identifiers to both the Peek (Preview) and Pop (Commit) segues. The commit segue also has options to configure everything that you can for a regular segue in storyboards such as Class, Module, and Kind.
Build and run your app on your iPhone and press the + button in the top right corner to create a new item.
Press firmly on the item and you will see that we get a preview of the detail view controller for that item.
You will see that our detail view does not yet show the right data for the item we are previewing. This is because we have not yet configured the view for the custom preview segue we defined in the storyboard. Back in your project, open MasterViewController.swift and replace the prepareForSegue(_:sender:) method with the following implementation:
override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
    if segue.identifier == "showDetail" {
        if let indexPath = self.tableView.indexPathForSelectedRow {
            let object = objects[indexPath.row] as! NSDate
            let controller = (segue.destinationViewController as! UINavigationController).topViewController as! DetailViewController
            controller.detailItem = object
            controller.navigationItem.leftBarButtonItem = self.splitViewController?.displayModeButtonItem()
            controller.navigationItem.leftItemsSupplementBackButton = true
        }
    } else if let cell = sender as? UITableViewCell where segue.identifier == "showDetailPeek" {
        let controller = (segue.destinationViewController as! UINavigationController).topViewController as! DetailViewController
        controller.detailItem = cell.textLabel?.text
        controller.navigationItem.leftBarButtonItem = self.splitViewController?.displayModeButtonItem()
        controller.navigationItem.leftItemsSupplementBackButton = true
    }
}
The first if statement remains unchanged. If that branch isn't taken, we then check whether the sender is a UITableViewCell and the segue identifier is equal to "showDetailPeek". If both of these conditions are met, we set the controller's detailItem to the cell's text.
Build and run your app again. This time, when peeking an item, you should get a properly configured preview as shown below.
One important thing to note is that these storyboard configurations for Peek and Pop will only work on devices running iOS 9.1 or later. To support devices running iOS 9.0, you will need to configure your Peek and Pop functionality in code as shown in the next section.
2. Peek and Pop in Code
While a bit more complicated than the storyboard setup, programmatically implementing Peek and Pop also allows you to add extra actions to your previews when the user swipes up. Take a look at the following screenshot to better understand what I mean.
Peek and pop is handled in code by the UIViewControllerPreviewingDelegate protocol. In your project, create a new iOS > Source > Swift File and name it MasterPreviewing.
We make the MasterViewController class conform to the UIViewControllerPreviewingDelegate protocol.
In the previewingContext(_:viewControllerForLocation:) method, we instantiate a ForceViewController from the storyboard and return this object. The preferredContentSize property determines how large the Peek preview will appear on the screen. When a size of (0, 0) is used, the preview automatically makes itself as large as it can for the current screen.
In the previewingContext(_:commitViewController:) method, we complete the transition from the MasterViewController instance and Pop the ForceViewController we created at the Peek stage on the screen.
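The listing for MasterPreviewing.swift is not reproduced above. A sketch consistent with the description, in the Swift 2 syntax used throughout this tutorial, could look like the following. It assumes the detail scene in Main.storyboard has the storyboard identifier "ForceViewController"; adjust the identifier to match the starter project.

```swift
import UIKit

// MasterPreviewing.swift (sketch): make MasterViewController conform to
// UIViewControllerPreviewingDelegate to drive Peek and Pop in code.
extension MasterViewController: UIViewControllerPreviewingDelegate {

    // Called when the user presses firmly enough to Peek.
    func previewingContext(previewingContext: UIViewControllerPreviewing, viewControllerForLocation location: CGPoint) -> UIViewController? {
        guard let forceViewController = storyboard?.instantiateViewControllerWithIdentifier("ForceViewController") as? ForceViewController else {
            return nil
        }
        // A size of (0, 0) lets the preview grow as large as the screen allows.
        forceViewController.preferredContentSize = CGSize(width: 0.0, height: 0.0)
        return forceViewController
    }

    // Called when the user presses harder to Pop the preview to full screen.
    func previewingContext(previewingContext: UIViewControllerPreviewing, commitViewController viewControllerToCommit: UIViewController) {
        showViewController(viewControllerToCommit, sender: self)
    }
}
```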
To use this new code, we also need to register specific views which we want to generate a preview for when pushed on firmly. To do this, open MasterViewController.swift and add the following code to viewDidLoad():
if traitCollection.forceTouchCapability == .Available {
    self.registerForPreviewingWithDelegate(self, sourceView: forceButton)
}
We first check to see if 3D Touch is available on the device (referred to as Force Touch by the API). If that is the case, we register the forceButton (the one at the bottom of the table) as an eligible view to Peek and Pop with.
Finally, to add actions to a preview, you need to define them in the preview's view controller class. Open ForceViewController.swift and add the following method to the class:
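The code listing is missing here; a sketch matching the description in the next paragraph (Swift 2 syntax, with illustrative titles and empty handlers) might be:

```swift
// Sketch: preview actions shown when the user swipes up on the Peek.
override func previewActionItems() -> [UIPreviewActionItem] {
    // A regular action; the handler runs when the action is selected.
    let regularAction = UIPreviewAction(title: "Action", style: .Default) { (action, viewController) in
        // Code to execute when the action is selected.
    }

    // A destructive action behaves the same but appears red.
    let destructiveAction = UIPreviewAction(title: "Destructive Action", style: .Destructive) { (action, viewController) in
        // Code to execute when the action is selected.
    }

    // A group collapses any number of actions under a single button.
    let groupedAction = UIPreviewAction(title: "Grouped Action", style: .Default) { (action, viewController) in
        // Code to execute when the action is selected.
    }
    let group = UIPreviewActionGroup(title: "Group...", style: .Default, actions: [groupedAction])

    return [regularAction, destructiveAction, group]
}
```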
We create three actions to display with the ForceViewController preview. The first is a regular action and is the most common. When this action is selected the (currently empty) block of code you define when creating the action will be executed. The second is a destructive action that will function exactly the same as the first, but it will appear red on the preview screen. Lastly, we create an action group that collapses any number of other actions under a single button.
Build and run your app once again. This time, push firmly on the Force button to see a new ForceViewController preview as shown below.
Swipe up on this preview to see the actions that we defined in the ForceViewController class.
Finally, press the Group... action to open up the actions contained in that group.
3. Detecting Force Through UITouch
On 3D Touch compatible devices, the UITouch class also gains some new functionality in the form of two new properties, force and maximumPossibleForce. These properties are very useful for any use case where you want a precise measurement of how much pressure is being applied to the screen.
Start by adding the following method to the ForceViewController class:
override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent?) {
    if let touch = touches.first where traitCollection.forceTouchCapability == .Available {
        self.forceOutput.text = "\(touch.force)\n\(touch.maximumPossibleForce)"
    }
}
This method is called whenever a touch on the screen moves, which includes changes in pressure as the finger presses harder or eases off. We retrieve the first UITouch object in the set and display its current force, along with the maximum possible force, on the screen.
Build and run your app and press the Force button to open the ForceViewController. Push anywhere on the screen with varying amounts of pressure and you will see that the label on the screen updates accordingly to show the current applied force as well as the maximum force that can be applied.
Note that these values are not associated with any physical unit and are independent of the user's 3D Touch sensitivity settings. A value of 1.0 represents the force applied on an average touch.
4. Home Screen Quick Actions
In addition to the new in-app functionality that 3D Touch offers, you can also add up to four shortcuts for specific functions of your application on your app icon. These quick actions can be accessed when a user presses deeply on your app's icon on the home screen as shown in the next screenshot.
There are two main types of quick actions you can create for your app, static and dynamic. Static quick actions are defined in your app's Info.plist and are available at all times for your application. Dynamic quick actions are created in your code and are added to the shared UIApplication object for your app.
For our app, we are going to create both a static and a dynamic quick action with the exact same behavior: adding a new item to the table view. This will show you how to use both action types in your own applications.
Quick actions are represented by the new UIApplicationShortcutItem class, which has the following properties:
localizedTitle: the main title of the quick action (e.g. New Tab in the above screenshot)
localizedSubtitle: an optional subtitle for the quick action, which is displayed below the main title
type: a unique string identifier for you to use to determine which quick action was selected
icon: an optional UIApplicationShortcutIcon object that can display a system-provided icon or a custom image
userInfo: an optional dictionary, which is useful for associating data with a quick action
Firstly, we are going to create the static quick action. Open the target's Info.plist file and add the following items exactly as shown in the screenshot below:
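The screenshot is not reproduced here. As a textual sketch, the Info.plist entries could look like the following fragment. The type string matches the one checked in the app delegate later in this tutorial; the title and userInfo contents are illustrative.

```xml
<key>UIApplicationShortcutItems</key>
<array>
    <dict>
        <key>UIApplicationShortcutItemType</key>
        <string>com.tutsplus.Introducing-3D-Touch.add-item</string>
        <key>UIApplicationShortcutItemTitle</key>
        <string>Add Item</string>
        <key>UIApplicationShortcutItemIconType</key>
        <string>UIApplicationShortcutIconTypeAdd</string>
        <key>UIApplicationShortcutItemUserInfo</key>
        <dict>
            <key>exampleKey</key>
            <string>exampleValue</string>
        </dict>
    </dict>
</array>
```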
Note that the UIApplicationShortcutItemIconType key can be swapped with the UIApplicationShortcutItemIconFile key, with the value being the name of the image file you want to use. The UIApplicationShortcutItemUserInfo value we provided is also just a basic example dictionary to show you how you can set up your own custom data.
Next, we are going to create the dynamic action. Open MasterViewController.swift and add the following two lines of code in the viewDidLoad() method:
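The two-line listing is missing here. A sketch in the tutorial's Swift 2 syntax might be the following; it assumes the dynamic action reuses the same type string the app delegate checks, so both quick actions share one handler.

```swift
// Sketch: register a dynamic quick action with the shared application object.
let shortcutItem = UIApplicationShortcutItem(type: "com.tutsplus.Introducing-3D-Touch.add-item", localizedTitle: "Add Item", localizedSubtitle: nil, icon: UIApplicationShortcutIcon(type: .Add), userInfo: nil)
UIApplication.sharedApplication().shortcutItems = [shortcutItem]
```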
Just like that, you have created both a static and dynamic quick action for your application.
Lastly, we need to handle our app's logic for when these quick actions are actually selected from the home screen. This is handled by your app delegate's application(_:performActionForShortcutItem:completionHandler:) method. Open AppDelegate.swift and add the following method to the AppDelegate class:
func application(application: UIApplication, performActionForShortcutItem shortcutItem: UIApplicationShortcutItem, completionHandler: (Bool) -> Void) {
    if shortcutItem.type == "com.tutsplus.Introducing-3D-Touch.add-item" {
        let splitViewController = self.window!.rootViewController as! UISplitViewController
        let navigationController = splitViewController.viewControllers[splitViewController.viewControllers.count - 1] as! UINavigationController
        let masterViewController = navigationController.viewControllers[0] as! MasterViewController
        masterViewController.insertNewObject(UIButton())
        completionHandler(true)
        return
    }
    completionHandler(false)
}
We first check the type of the quick action and then access the MasterViewController object. On this object, we call the insertNewObject(_:) method to insert a new item into the table view. Note that this method is provided by the iOS > Application > Master-Detail Application template and requires an AnyObject parameter. This parameter is not used in the actual method implementation, however, and can be any object. Finally, we call the completionHandler with a boolean value to tell the system whether or not the quick action was executed successfully; returning after the success case ensures the handler is only called once.
Build and run your app one last time. Once it has loaded, go to your device's home screen and push firmly on the app icon. You will see that two quick actions are available for your app.
Next, press on either one of these and your application should open with a new item added to the table view.
Conclusion
You should now be comfortable with the 3D Touch APIs available in iOS 9, including Peek and Pop, detecting force via UITouch, and home screen quick actions. 3D Touch offers many new ways of interacting with your device and I highly encourage everyone to adopt it within their own applications.
As always, you can leave your comments and feedback below.