It's happened to us all: we change something in our code, and suddenly, everything seems to be "broken." This is when version control is a boon—if you know how to use it. In this tutorial, we'll learn how to use Git from the command line.
Xcode and other modern IDEs have basic options for Git built into their graphical user interface, but you only get full, fine-grained control of your repo (Git repository) through the command line. If you're doing advanced coding or Git management, it's important to be comfortable with the command line. If you've never used the command line before, you might like to check out my other tutorial on the topic:
Before we get started, we should review what exactly version control is. A version control system is software that saves revisions of your code and other data, so that you can roll back to previous versions, review the changes that have been made, and share updates with collaborators.
There are many advantages and use cases for version control. For example, by reviewing the changes ("commits") to your project, you can identify who wrote any particular bit of code and why. You can also roll back any changes that were found to be in error or to break functionality.
The most commonly used version control system today is Git, so that's the one we'll be looking at in this post. Be aware that there are other widely used systems, though—for example, SVN and Mercurial.
Key Terms and Concepts
repository or repo—contains all the code for a single project, along with the entire change history of each file.
working directory—when you edit your code, you're making changes to your working directory. If you want to save these changes to the repo, you'll need to make a commit. If all the changes in the working directory have been committed to the repo, the working directory is clean.
commit—a group of changes to source files. Usually, these changes are grouped together so that each commit pertains to a single bug fix or feature.
branch—the work on a project can be organized into branches. This lets one developer or group of developers work on one feature, while another developer works on another feature.
merge—brings the changes in two branches together. Often, this can be done automatically by Git, but if there are conflicts, you might have to manually tell Git how to merge the files.
Repository Management Services
When you use version control, you create a repository, or repo, and it is most common to host this repo on a repository management service. For the purposes of this tutorial, we won't be hosting our repo anywhere, so you can focus on actually using version control. If you want, though, you can read up on these repository management services, and everything you learn here will carry over to them.
A few examples of these are GitHub, Bitbucket, GitLab, and Coding, and they are widely used by developers all around the world. I, and many others, use GitHub because it hosts a huge number of open-source projects. GitHub repos are public by default, but you can create private repos for a monthly fee.
Getting Started
Creating an Application
To start, you'll need to create a new application in Xcode. You can use any template you wish, and if you have an existing app that already has a Git repository, you can use that for this tutorial instead.
Here's what the IDE should look like just before you finally create your project (when you need to decide the location to store the project):
Make sure that the box which says Create Git repository on my Mac is checked, as this ensures that your project starts out with a local repo. Later, if you choose to use a repository management system, you'll be able to push all of this code, and every commit you've ever made will show up.
Opening the Terminal
To get to the command line, you'll need to open the Terminal. You can do this in one of two ways. You can open Launchpad, and there you can find the Terminal icon in the Other folder on the first page of Launchpad. Or you can hit Command-Space on your keyboard and search for Terminal in Spotlight.
Once you open the terminal, you should see something like the following.
This is called the "command prompt"—you'll see the current directory, then your username followed by a $.
All right! You're now ready to learn about how to use version control on the Terminal.
Terminal Commands Cheat Sheet
Here are some of the basic commands I wrote about in my tutorial about getting started with Terminal. You'll need to know these in order to use the terminal effectively.
Help
help—as the name suggests, you can type this command into the Terminal to get a list of different commands.
man <command name>—similar to the previous command, this command tells you exactly what a command does (and gives you full documentation) without you having to search Google for it.
File Management
ls—lists all of the contents in the current directory. This command comes in handy if you don't want to use the Finder to browse files—you can simply list them using this command in your Terminal.
cd <directory name>—this command is used to change directories. Writing cd .. moves you up and out of the current directory, while cd on its own takes you back to your home directory. After using ls (to see the directories), you can write the name of the directory you want to enter, as in the short example below.
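For example, here's a short, hypothetical session (the folder name Projects is just a placeholder) that lists the current directory's contents, enters a folder, and then moves back up a level:
$ ls
$ cd Projects
$ cd ..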
Changing Preferences
defaults <setting to change>—this command is used to modify default system settings, some of which cannot be changed without using the terminal.
caffeinate—as the name suggests, this command is used to prevent your Mac from dimming, turning off, or sleeping. To end this, all you need to do is press Control-C.
Text Editing
vim <file name>—this is one of my favorites. You can edit text files using the default TextEdit (or any GUI-based editor), but vim is basically a command-line text editor—that is, it works entirely within the terminal.
Networking
ping <URL or IP Address>—this command allows you to check the server response time of a specified URL or IP address. This may not be useful for every developer, but it is nice to know.
Admin
sudo <command to perform>—runs the given command with superuser privileges, overriding your user's normal permissions. You will be prompted for an admin password when you use it.
Different Git Commands
Now that you've seen some basic terminal commands, let's learn about the different things you can do with Git. I won't be covering them all, but I will teach you about the main ones which you'll use in day-to-day development. If you ever need any more information, you can just run git help in your terminal for details, and if that's not enough, for full documentation, you can run man git to get the manual page.
Here's what the help pages look like:
Creating a Project
git clone {remote-repo-link}—if you want to clone a repository from a repository management service, you can use this command along with the URL to get a local copy on your computer.
git init—if you're creating a new repository from an existing folder, you can use this command. It will initialize the current folder to be a new repo. Usually, you would do this when you create a project for the first time.
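As a quick sketch of both commands, either of the following gives you a working local repo (the URL and folder name below are placeholders, not real addresses):
# Copy an existing remote repo to your computer
$ git clone https://example.com/your-username/your-repo.git
# Or turn an existing local folder into a brand-new repo
$ cd MyProject
$ git init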
Committing Changes
git status—tells you what files in your working directory have been changed. If you have changed files, it might be time to make a commit!
git commit -am "{helpful commit message}"—when you've made some changes to your project (for example, when you've completed a simple feature or made a bug fix), you should commit your change. Be sure to provide a concise and clear commit message, as this will help other developers understand what you have done and why.
git add {filename} or git add --all—new files aren't tracked automatically; before you can commit a new file, you have to stage it with the add command.
Repository Branches
git branch {branch-name?}—with this command, you can either list the current branches or create a new one.
git merge {branch-name}—merges the named branch into the current branch. This combines the code from the named branch with the branch you're on.
git checkout {branch-name}—switch to the indicated branch. This will simply put the current branch aside and make the other branch active.
Repository Management Systems
git push—updates the repository in the repository management system. After you're done making changes and are sure your code is working well, you can push your code so that other members can see the code and pull it.
git pull—updates your local working copy of the repo to reflect the latest updates that have been pushed to the repository management system. It's a good idea to do this before you make any changes if you're working on a team.
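To sketch how these fit together once you do host your repo on one of these services: you register the remote address once, then push and pull as needed (the URL below is a placeholder):
$ git remote add origin https://example.com/your-username/your-repo.git
$ git push -u origin master
$ git pull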
Those are a few of the main commands that you'll be using in version control to get started, but before we end this tutorial, let's take a look at a few of these in depth with the Xcode project that we created earlier.
Example of Using Git With an Xcode Project
Now, let's take a look at a few examples of how to use command-line Git with the Xcode project that we created earlier in this tutorial. Note that we'll be using the terminal commands above, so make sure you keep referring to them or memorize them.
Navigating to the Project Directory
Before we begin, you'll need to navigate to the project directory using the commands that are mentioned above (hint: use the cd and ls commands). Once you are there, run ls and make sure that you have something like this:
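Assuming your project is named MyProject, the listing would read:
MyProject    MyProject.xcodeproj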
Ta-da! Now you're in your project directory and ready to do anything you need to with your project. Simply follow along as we commit and branch.
Commit
Committing your code is the thing you'll do most often in programming. As soon as you make a working change, the best practice is to commit it along with detailed comments.
Making a Change
To start off, make a change to the Xcode project. For mine, I'll just add the following line of dummy code in my viewDidLoad() method:
let foobar = "foo"
Getting the Status
After you've finished adding (or subtracting) a line or two of code, you're ready to check the status of your project. To do this, paste the following command (minus the $ prompt) into your Terminal window:
$ git status
You'll see something like this:
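Assuming the modified file is your view controller, the output would read:
On branch master
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

	modified:   MyProject/ViewController.swift

no changes added to commit (use "git add" and/or "git commit -a")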
The file you've modified is highlighted in red, which tells you it has uncommitted changes.
Adding Files to Staging Area
If you want to only commit certain files, you can do so using a "staging area" where only those files will be committed. To add all the modified files to the "staging area," all you need to do is run the following line of code:
$ git add -A
The -A flag that you see means that all of the files that you have modified will get added (-A is for all, and you can also write git add --all).
To see that your files are ready to commit, simply run the following again:
$ git status
You'll see something like this:
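This time, the file appears under Changes to be committed:
On branch master
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)

	modified:   MyProject/ViewController.swift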
See? The same file which was red is now green, which indicates that you've successfully prepared it to commit.
Committing Your Code
Lastly, to finally commit your code, all you'll need to do is run the following command in your terminal, and between the quotes, add a message.
$ git commit -m "My very first commit."
The -m flag tells Git that you're supplying the commit message inline, and it's pretty important to be clear with this message. These messages are crucial for keeping track of the changes to your project.
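If you want to double-check that the commit was recorded, git log prints the commit history; the --oneline flag condenses it to one line per commit (the hash below is just an example):
$ git log --oneline
1a2b3c4 My very first commit.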
Now, you've made your very first commit! You're on the right track to making your code easier to manage and safer to change.
Branches
The second most common thing you'll do as a developer is to create, merge, and use branches to sort out your code and isolate features before you roll them out to customers.
Creating a New Branch
By default, you're in what we call the "master" branch. That is the main branch which everything should eventually end up in. Best practice, especially when working with a team, is to work on major new features in their own branches, which are merged back into master when complete.
To practice working with branches, let's create a new branch. To do this, run the following command:
$ git branch my-first-branch
You can name the branch whatever you'd like.
To see the new branch, you can type:
$ git branch
When you run that command, you'll see something like this:
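* master
  my-first-branch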
Notice that you can see two branches: master and my-first-branch (or whatever you named your branch). Additionally, you'll see that there is an asterisk by the master branch, which indicates that you're currently in that branch.
Checking Out a Current Branch
If you ever need to switch to another existing branch, you will need to check out that branch. When you do this, you're leaving the current branch, and all of its committed code will remain intact, while your working directory is populated with the code from the branch you checked out. (Any uncommitted local changes carry over to the new branch, or block the checkout if they would conflict with it.)
Try it out with the following command:
$ git checkout my-first-branch
You should get a confirmation which looks something like this:
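Switched to branch 'my-first-branch'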
Now you've switched to this branch, and your working directory should be clean. To confirm this, run git status to check whether there are any modified files.
Merging Branches
After you're done making changes, you would normally merge the branch back into the master branch. We haven't made any changes on our new branch yet, so let's do that now, before we merge the two branches.
Make another change to the Xcode project. For mine, I'll just add the following line of dummy code in my viewDidLoad() method:
let gooey = "fooey"
You can make any change you like. Just make sure you know what file and what change you made.
After that's done, run the following line of code again:
$ git status
Now, you should see the filename in red, and you will need to commit before merging this change back to your main branch. I trust that you know how to do that, so let's skip ahead to the next step. Double check that the commit was successful with git status.
By now, you should have committed the code, so let's get ready to merge the two branches. First, run the following command:
$ git checkout master
This command switches to the master branch to prepare to merge with the other branch that we created. Finally, to merge, run the following command:
$ git merge my-first-branch
You should get a confirmation which looks like this:
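Assuming no other changes were made on master in the meantime, you'll see a fast-forward merge (the hashes and file names will differ):
Updating 1a2b3c4..5d6e7f8
Fast-forward
 MyProject/ViewController.swift | 1 +
 1 file changed, 1 insertion(+)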
Now your changes from the feature branch have been merged back into master. If the master branch has changed since the branch was created, Git will try to automatically combine your feature branch changes with master. If it cannot do so automatically, it will prompt you to manually resolve any conflicts.
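For reference, when Git can't combine the branches automatically, it marks the conflicting region of the file like this (a sketch based on our dummy code; the second value is hypothetical), and you resolve it by editing the file to keep the version you want and then committing the result:
<<<<<<< HEAD
let gooey = "fooey"
=======
let gooey = "chewy"
>>>>>>> my-first-branch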
Now you know how to merge branches, create them, and switch between them using just the Terminal!
Conclusion
As you see, it's not too difficult to do version control with your project, and the rewards are well worth it. Version control is a core development best practice, and you should be familiar with it if you want to work in a professional context.
I hope this post has given you the confidence to use version control on a day-to-day basis. If you want to learn more about Git, check out some of our animated instructional videos here on Envato Tuts+.
Cloud Firestore is a recent addition to the Firebase family of products. Although still in beta, it's already being presented by Google as a more flexible and feature-rich alternative to the Firebase Realtime Database.
If you've ever used the Realtime Database, you're probably aware that it's essentially a large JSON document best suited for storing simple key-value pairs only. Storing hierarchical data on it efficiently and securely, although possible, is quite cumbersome and requires a well-thought-out strategy, which usually involves flattening the data as much as possible or denormalizing it. Without such a strategy, queries on the Realtime Database are likely to consume unnecessarily large amounts of bandwidth.
Cloud Firestore, being more akin to document-oriented databases such as MongoDB and CouchDB, has no such problems. Moreover, it comes with a large number of very handy features, such as support for batch operations, atomic writes, and indexed queries.
In this tutorial, I'll help you get started with using Cloud Firestore on the Android platform.
To follow this tutorial, you'll need a device or emulator running Android 4.4 or higher.
1. Creating a Firebase Project
Before you use Firebase products in your Android app, you must create a new project for it in the Firebase console. To do so, log in to the console and press the Add project button in the welcome screen.
In the dialog that pops up, give a meaningful name to the project, optionally give a meaningful ID to it, and press the Create Project button.
Once the project has been created, you can set Firestore as its database by navigating to Develop > Database and pressing the Try Firestore Beta button.
In the next screen, make sure you choose the Start in test mode option and press the Enable button.
At this point, you'll have an empty Firestore database all ready to be used in your app.
2. Configuring the Android Project
Your Android Studio project still knows nothing about the Firebase project you created in the previous step. The easiest way to establish a connection between the two is to use Android Studio's Firebase Assistant.
Go to Tools > Firebase to open the Assistant.
Because Firestore is still in beta, the Assistant doesn't support it yet. Nevertheless, by adding Firebase Analytics to your app, you'll be able to automate most of the required configuration steps.
Start by clicking on the Log an Analytics Event link under the Analytics section and pressing the Connect to Firebase button. A new browser window should now pop up asking you if you want to allow Android Studio to, among other things, manage Firebase data.
Press the Allow button to continue.
Back in Android Studio, in the dialog that pops up, select the Choose an existing Firebase or Google project option, pick the Firebase project you created earlier, and press the Connect to Firebase button.
Next, press the Add Analytics to your app button to add the core Firebase dependencies to your project.
Finally, to add Firestore as an implementation dependency, add the following line in the app module's build.gradle file:
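// The exact version is an assumption for illustration; keep it in sync with your Firebase Core dependency
implementation 'com.google.firebase:firebase-firestore:11.4.2'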
Don't forget to press the Sync Now button to complete the configuration. If you encounter any version conflict errors during the sync process, make sure that the versions of the Firestore dependency and the Firebase Core dependency are identical and try again.
3. Understanding Documents and Collections
Firestore is a NoSQL database that allows you to store data in the form of JSON-like documents. However, a document stored on it cannot exist independently. It must always belong to a collection. As its name suggests, a collection is nothing but a bunch of documents.
Documents within a collection are obviously siblings. If you want to establish parent-child relationships between them, though, you must use subcollections. A subcollection is just a collection that belongs to a document. By default, a document automatically becomes the parent of all the documents that belong to its subcollections.
It is also worth noting that Firestore manages the creation and deletion of both collections and subcollections by itself. Whenever you try to add a document to a non-existent collection, it creates the collection. Similarly, once you delete all the documents from a collection, it deletes it.
4. Creating Documents
To be able to write to the Firestore database from your Android app, you must first get a reference to it by calling the getInstance() method of the FirebaseFirestore class.
val myDB = FirebaseFirestore.getInstance()
Next, you must either create a new collection or get a reference to an existing collection, by calling the collection() method. For example, on an empty database, the following code creates a new collection named solar_system:
val solarSystem = myDB.collection("solar_system")
Once you have a reference to a collection, you can start adding documents to it by calling its add() method, which expects a map as its argument.
// Add a document
solarSystem.add(mapOf(
"name" to "Mercury",
"number" to 1,
"gravity" to 3.7
))
// Add another document
solarSystem.add(mapOf(
"name" to "Venus",
"number" to 2,
"gravity" to 8.87
))
The add() method automatically generates and assigns a unique alphanumeric identifier to every document it creates. If you want your documents to have your own custom IDs instead, you must first manually create those documents by calling the document() method, which takes a unique ID string as its input. You can then populate the documents by calling the set() method, which, like the add method, expects a map as its only argument.
For example, the following code creates and populates a new document called PLANET_EARTH:
solarSystem.document("PLANET_EARTH")
.set(mapOf(
"name" to "Earth",
"number" to 3,
"gravity" to 9.807
))
If you go to the Firebase console and take a look at the contents of the database, you'll be able to spot the custom ID easily.
Beware that if the custom ID you pass to the document() method already exists in the database, the set() method will overwrite the contents of the associated document.
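If you want to update an existing document without wiping out its other fields, you can pass SetOptions.merge() as a second argument to set(). Here's a minimal sketch that changes only the gravity of our PLANET_EARTH document (the new value is just for illustration):
// Requires com.google.firebase.firestore.SetOptions
solarSystem.document("PLANET_EARTH")
    .set(mapOf(
        "gravity" to 9.81
    ), SetOptions.merge())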
5. Creating Subcollections
Support for subcollections is one of the most powerful features of Firestore and is what makes it markedly different from the Firebase Realtime Database. Using subcollections, you can not only easily add nested structures to your data but also be sure that your queries will consume minimal amounts of bandwidth.
Creating a subcollection is much like creating a collection. All you need to do is call the collection() method on a DocumentReference object and pass a string to it, which will be used as the name of the subcollection.
For example, the following code creates a subcollection called satellites and associates it with the PLANET_EARTH document:
val satellitesOfEarth = solarSystem.document("PLANET_EARTH")
.collection("satellites")
Once you have a reference to a subcollection, you are free to call the add() or set() methods to add documents to it.
satellitesOfEarth.add(mapOf(
"name" to "The Moon",
"gravity" to 1.62,
"radius" to 1738
))
After you run the above code, the PLANET_EARTH document will look like this in the Firebase console:
6. Running Queries
Performing a read operation on your Firestore database is very easy if you know the ID of the document you want to read. Why? Because you can directly get a reference to the document by calling the collection() and document() methods. For instance, here's how you can get a reference to the PLANET_EARTH document that belongs to the solar_system collection:
val planetEarthDoc = myDB.collection("solar_system")
.document("PLANET_EARTH")
To actually read the contents of the document, you must call the asynchronous get() method, which returns a Task. By adding an OnSuccessListener to it, you can be notified when the read operation completes successfully.
The result of a read operation is a DocumentSnapshot object, which contains the key-value pairs present in the document. By using its get() method, you can get the value of any valid key. The following example shows you how:
planetEarthDoc.get().addOnSuccessListener {
println(
"Gravity of ${it.get("name")} is ${it.get("gravity")} m/s/s"
)
}
// OUTPUT:
// Gravity of Earth is 9.807 m/s/s
If you don't know the ID of the document you want to read, you will have to run a traditional query on an entire collection. The Firestore API provides intuitively named filter methods such as whereEqualTo(), whereLessThan(), and whereGreaterThan(). Because the filter methods can return multiple documents as their results, you'll need a loop inside your OnSuccessListener to handle each result.
For example, to get the contents of the document for planet Venus, which we added in an earlier step, you could use the following code:
myDB.collection("solar_system")
.whereEqualTo("name", "Venus")
.get().addOnSuccessListener {
it.forEach {
println(
"Gravity of ${it.get("name")} is ${it.get("gravity")} m/s/s"
)
}
}
// OUTPUT:
// Gravity of Venus is 8.87 m/s/s
Lastly, if you are interested in reading all the documents that belong to a collection, you can directly call the get() method on the collection. For instance, here's how you can list all the planets present in the solar_system collection:
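// A sketch following the same query pattern as above
myDB.collection("solar_system")
    .get().addOnSuccessListener {
        it.forEach {
            // Print the name stored in each document
            println(it.get("name"))
        }
    }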
Note that, by default, there is no definite order in which the results are returned. If you want to order them based on a key that's present in all the results, you can make use of the orderBy() method. The following code orders the results based on the value of the number key:
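// A sketch assuming the same solar_system collection as above
myDB.collection("solar_system")
    .orderBy("number")
    .get().addOnSuccessListener {
        it.forEach {
            println(it.get("name"))
        }
    }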
7. Deleting Documents
Deleting a single document is easy: just get a reference to it and call its delete() method. Deleting multiple documents—documents you get as the result of a query—is slightly more complicated, because there's no built-in method for doing so. There are two different approaches you can follow.
The easiest and most intuitive approach—though one that's suitable only for a very small number of documents—is to loop through the results, get a reference to each document, and then call the delete() method. Here's how you can use the approach to delete all the documents in the solar_system collection:
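// A sketch of the loop-and-delete approach
myDB.collection("solar_system")
    .get().addOnSuccessListener {
        it.forEach {
            // Get a reference to each result document and delete it
            it.reference.delete()
        }
    }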
A more efficient and scalable approach is to use a batch operation. Batch operations can not only delete multiple documents atomically but also significantly reduce the number of network connections required.
To create a new batch, you must call the batch() method of your database, which returns an instance of the WriteBatch class. Then, you can loop through all the results of the query and mark them for deletion by passing them to the delete() method of the WriteBatch object. Finally, to actually start the deletion process, you can call the commit() method. The following code shows you how:
myDB.collection("solar_system")
.get().addOnSuccessListener {
// Create batch
val myBatch = myDB.batch()
// Add documents to batch
it.forEach {
myBatch.delete(it.reference)
}
// Run batch
myBatch.commit()
}
Note that trying to add too many documents to a single batch operation can lead to out-of-memory errors. Therefore, if your query is likely to return a large number of documents, make sure you split them into multiple batches.
Conclusion
In this introductory tutorial, you learned how to perform read and write operations on the Google Cloud Firestore. I suggest you start using it in your Android projects right away. There's a good chance that it will replace the Realtime Database in the future. In fact, Google already says that by the time it comes out of beta, it will be much more reliable and scalable than the Realtime Database.
Smartphone use has seen explosive growth over the last decade. For this reason, many companies and independent developers see publishing an app on either Google Play or Apple's App Store as a good way of making money.
This, in turn, has flooded the Play Store and the App Store with over 2 million apps each. There are hundreds of thousands of apps that all do about the same thing. While some of those apps have been developed by amateurs, others have been created by professionals. The cut-throat competition makes it very hard for new apps to become popular.
To stand out, you have to provide a great experience that compels users to give you nothing less than a five-star rating. Not only that, but you also have to get rid of bugs in your app as quickly as possible, so that any frustrated users don't end up giving you poor ratings.
This requires you to have access to detailed bug reports, which would be possible only if you knew the steps a user followed and many other device-related details and logs. Having access to network logs and allowing users or beta testers to file bug reports directly from within the app would significantly speed up the process.
In this tutorial, you will learn about a tool called Instabug, which does exactly that.
Getting Started With Instabug
The good news is that you don't need to do a lot of work to follow this tutorial and see how Instabug works. As you will see, the integration process is quite easy. You can use Instabug free of cost for the first 14 days, so you can just go and sign up for the service.
After signing up, you will be asked to integrate the SDK into your app. I will be using an Android app to show you all the features of Instabug, but you can easily integrate it with native iOS apps or Hybrid apps.
You don't even need to have an app at first. Just download the sample app provided by Instabug and start seeing bug reports in your Instabug dashboard. If you want to use your own app, you will have to make two small changes in order to integrate the SDK:
Inside the build.gradle file, add Instabug as a dependency and then synchronize the Gradle files. If you have downloaded the sample app, you should still check that it is requesting the latest version of the dependency, which at the time of writing this tutorial was 4.5.0.
compile 'com.instabug.library:instabug:4.5.0'
The next step would be to initialize Instabug inside your application's onCreate() method using the following code:
new Instabug.Builder(this, "APP_TOKEN")
.setInvocationEvent(InstabugInvocationEvent.SHAKE)
.build();
You can find your own APP_TOKEN by selecting the SDK tab from your Instabug dashboard.
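For context, here's a minimal sketch of what that initialization looks like inside a custom Application class (the class name matches the sample app's SampleApplication; double-check the import paths against your SDK version, and remember to register the class in your manifest):
import android.app.Application;
import com.instabug.library.Instabug;
import com.instabug.library.invocation.InstabugInvocationEvent;

public class SampleApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        // Initialize Instabug; SHAKE means shaking the device invokes the bug reporter
        new Instabug.Builder(this, "APP_TOKEN")
                .setInvocationEvent(InstabugInvocationEvent.SHAKE)
                .build();
    }
}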
After performing these two steps, you are now ready to squash any bugs that your users might report.
Instabug will automatically add some permissions to the AndroidManifest.xml file. These will enable the app to get information about the network and WiFi connection. Other permissions will allow users to attach images, videos, and audio recordings to their bug reports.
The process of integrating the SDK is just as simple for iOS and hybrid apps. The documentation is easy to follow and lists all the steps in great detail.
One very important feature of Instabug is that it goes to great lengths to make sure that users feel very comfortable with any app that integrates Instabug and nothing seems out of place. This is achieved by allowing you to control everything from invocation and popups to the design and locale of the SDK.
By default, the SDK will automatically use the current locale of the device. However, you can change it to any other language using the setLocale() method. If you are using the sample app provided by Instabug, you will notice that the locale has been set to German. Upon inspecting the SampleApplication.java file, you will find the following code inside it:
These are three different calls that specify the language you want the SDK to use. The locale value set by the first two calls to setLocale() is overridden by the last one, and that's why you see the SDK instructions in German. If you want the SDK to use the current locale of the device, you can remove all these lines from the SampleApplication.java file. Similar instructions for specifying the locale are also available for iOS and hybrid apps.
By default, the SDK is invoked when your users shake their device. Instabug also allows you to control how the SDK should be invoked. This can be helpful when you are using the shake feature for some other purpose in your app.
You can also set the SDK to be invoked on taking a normal screenshot, a two-finger swipe from right to left, or tapping a floating button shown above your app's UI. The documentation provides a lot of extra information on how to change the invocation event at runtime or manually invoke the SDK for Android, iOS and hybrid apps.
You can also control the design of the SDK to make the bug reporting experience as seamless as possible. Instabug allows you to choose from a light or dark theme, specify the primary color for the UI elements of the SDK, and control the position of the floating button used to invoke the SDK.
The documentation provides all the steps to control the design of the SDK in great detail for Android, iOS and hybrid apps. The Instabug team has made sure that your users don't feel that anything about the bug reporting mechanism is out of place when using your app.
Occasionally, Instabug will also use popups either to help users with something or to collect user data. Since popups are a huge part of the overall user experience, Instabug allows you to have full control over popups, including when they should appear or if a popup should appear at all.
For instance, the intro message popup only appears when the length of the first user session goes over 30 seconds. If the user invokes the SDK before that, the popup does not appear at all. You also have the option to disable the introductory popup entirely using the following line:
Instabug.setIntroMessageEnabled(false);
You can also show the popup at a particular time using the following line:
Instabug.showIntroMessage();
The documentation provides more details on how to control other popups inside Android, iOS and hybrid apps.
Instabug Sends a Lot of Data With Bug Reports
Whenever users send a crash or bug report from your application, Instabug collects as much relevant information about it as it can. These detailed reports allow you to debug the problem with ease and quickly get rid of any bugs and crashes that occur in your app.
Instabug allows you to identify the users who sent the bug report so that you can communicate with them about the bug. By default, Instabug asks users for their email when they submit a bug report. However, you can also set the email and the username yourself. Once you have set these values, the SDK won't ask for an email again when a user submits a bug report.
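For example, on Android you might identify the user right after they sign in. The call below is a sketch—identifyUser() is the method the Instabug Android SDK documents for this, but check the docs for the exact signature in your SDK version, and note that the name and email here are placeholders:

// Associate subsequent bug reports with this user
Instabug.identifyUser("John Doe", "john@example.com");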
You can also attach custom attributes to your users, and these attributes will be shown to you in the Instabug dashboard. These custom attributes, as well as other attributes like the OS level and screen dimensions that are set automatically by Instabug, can help you filter bug reports that occur only on devices with a particular OS level, etc. The Instabug documentation covers this feature in great detail for Android, iOS and hybrid apps.
As you can see in the above screenshot, I have blurred out the text inside the first button. All users who want to submit a bug report will be able to blur out sensitive information from any screenshot they send to you.
Having access to different kinds of logs can go a long way when figuring out how to get rid of a bug in your app. For this reason, Instabug sends all kinds of logs with a bug report. You get access to the console logs as well as the network logs. The network log provides information about each request along with the responses.
In addition to the logs, Instabug also records all the steps a user took to help you reproduce the bug on your end. You can also log custom user events with each bug report. One important thing to keep in mind is that the number of log entries and user steps sent back to you by Instabug is capped at 1,000 each.
Quite a few bugs that your users report will be related to a variety of UI problems. They might not be seeing a button, or the app menu might be missing a few options. In any case, having access to the view hierarchy can be very helpful when you want to figure out what is wrong.
A few things might hide a button or other element from the user: for example, it could be hiding behind another UI element, or positioned outside its parent view's bounds. The underlying reason can easily be figured out by taking a look at the view hierarchy generated by Instabug.
As you can see in the above image, clicking on any UI element will highlight it on the right side and show important information like the width, height, and padding applied to it. This can be crucial when debugging UI-related problems. Instabug also allows you to zoom in and out of the view hierarchy.
You also get to control the number of layers that you want Instabug to render and the spacing between different layers. This way, you will be able to easily debug user interfaces with hundreds of elements.
All this information that Instabug collects will be sufficient to get rid of almost any bug that you might encounter. At the same time, having access to a screenshot from the app, a video recording of the bug, or a voice note by the user which describes the problem they are facing can provide additional context that might be missing in some complex bugs.
Instabug always sends a screenshot which was taken when the SDK was invoked. In addition to that, users can attach extra screenshots from the app, an image from the gallery, a voice note, or a screen recording. Users are allowed to attach up to three files, each of which can be up to 5 MB in size.
An Overview of the Instabug Dashboard
The Instabug dashboard provides you with a list of all the team members and an activity log to give you a rough idea of what everyone on the team has been up to lately. One section of the dashboard also specifies the number of new, in-progress, and closed bug and crash reports. Similarly, you can also see the number of new and closed chats as well as published and paused surveys.
Besides the tools to help you get rid of any bugs in your app efficiently, Instabug offers a lot of other features as well. Efficient communication with the users or beta testers of your app can sometimes become difficult. Not all users will be willing to continuously switch between your app and emails in order to communicate their problems. The Instabug team understands this, and that's why it offers in-app chat. You will now be able to talk with your users directly from within the app.
A lot of users vent their frustration with an app in the reviews on app stores. Those one-star ratings and reviews can drive away potential new users of your app. Giving your users the option of in-app chat can keep your ratings up while helping you quickly answer all their queries. You can turn any chat that you had with a user into a bug report and forward it to the development team for quick resolution. Instabug also allows you to send actionable messages to your users, like a link to download the latest version of your app.
You can also create surveys and send them out to different users of your app using Instabug. The surveys can have a text field or multiple choices for your users to choose from. You can run a survey whenever you want to gather information about the general usage patterns of your app or ask users for suggestions on how to improve the app. You will be able to access the responses to every survey that you have published in the dashboard.
You might not want to send a survey that you create to all the users of your app. Instabug allows you to send targeted surveys only meant for a subset of your users. In other words, a survey would only be sent to users who meet a certain condition. If no condition is specified, the survey will be sent to everyone by default.
More Features and Third-Party Integrations
There is a good chance that your company uses more than just one tool when developing apps. For instance, you might be using Slack for collaborating and effective communication within the team and Trello for project management. Similarly, you might be using JIRA as an issue tracking tool and Zendesk for offering customer support. Instabug offers integrations for all these tools and allows you to keep track of everything from one central location.
With so many integrations, you can keep using all the tools that you have been using for development without adding unnecessary friction. As an example, your users and beta testers can keep filing bug reports and providing feedback directly from within the app, and if you have integrated JIRA with Instabug, all these bug reports will be automatically logged in your JIRA project.
In this section, I have only named a few services which can be integrated with Instabug. You can access the whole list of tools for integration with Instabug in the Integrations Hub.
Developing a mobile app is a continuous process. You will be regularly releasing new versions of your app, and each version will have its own set of features and bugs. Some steps in the app development cycle are repetitive, and you can automate them to save valuable time.
With so many companies competing to get more and more apps published on the app stores, it is crucial for you and your team to move fast and save time. Keeping this need in mind, Instabug allows you to automate a lot of tasks.
For example, you can set up Instabug to automatically notify users when the bug they reported has been fixed. Similarly, you might want to thank users whenever they report a new bug or assign bug reports that fall under certain categories to a specific developer or team.
Letting Instabug handle all these repetitive tasks for you can save some of your valuable time, which can be utilized to do something more productive.
Final Thoughts
Instabug is an amazing tool which can help users provide great in-app feedback and bug reports from directly within your app. As you saw in the tutorial, integrating Instabug within existing apps just takes a few minutes. Instabug does all the heavy lifting for you.
The service is focused entirely on improving the experience of your users. Keeping this goal in mind, Instabug gives you access to a lot of options which control everything from the ways users can invoke the SDK to the primary color of different UI elements added by the SDK. This significantly improves the user experience, and nothing about the bug reporting mechanism seems out of place.
Besides improving the user experience, Instabug also makes the debugging process a lot easier. The detailed bug reports with all the logs and information about the user device can significantly cut the time taken for the development team to get rid of any bugs in the app. The ability to integrate with so many tools also makes it easier to use different third-party services together, without adding unnecessary friction.
In short, Instabug has everything that you might need in order to allow your beta testers and developers to work together and squash all the bugs in your apps. You should certainly sign up for Instabug to see if it makes a difference to your mobile app development process. The service is free to use for the first 14 days, so you don't have anything to lose by giving it a try.
Bottom navigation bars make it easy to explore and switch between top-level views in a single tap.
Tapping on a bottom navigation icon takes you directly to the associated view or refreshes the currently active view.
According to the official material design guidelines for the bottom navigation bar, it should be used when your app has:
three to five top-level destinations
destinations requiring direct access
An example of a popular app that implements the bottom navigation bar is Google's Google+ Android app, which uses it to navigate to different destinations within the app. You can see this for yourself by downloading the Google+ app from the Google Play Store (if you don't already have it on your device). The following screenshot is from the Google+ app, displaying a bottom navigation bar.
In this post, you'll learn how to display menu items inside a bottom navigation bar in Android. We'll use the BottomNavigationView API to perform the task. For an additional bonus, you'll also learn how to use the Android Studio templates feature to quickly bootstrap your project with a bottom navigation bar.
A sample project (in Kotlin) for this tutorial can be found on our GitHub repo so you can easily follow along.
1. Creating the Project
Fire up Android Studio and create a new project (you can name it BottomNavigationDemo) with an empty activity called MainActivity. Make sure to also check the Include Kotlin support check box.
2. Adding the BottomNavigationView
To begin using BottomNavigationView in your project, make sure you import the design support and also the Android support artifact. Add these to your module's build.gradle file to import them.
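For example, your module's dependencies block might contain lines like the following (the version numbers here are assumptions from around the time of writing—use the latest available):

implementation 'com.android.support:appcompat-v7:27.1.1'
implementation 'com.android.support:design:27.1.1'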
Also, visit your res/layout/activity_main.xml file to include the BottomNavigationView widget. This layout file also includes a ConstraintLayout and a FrameLayout. Note that the FrameLayout will serve as a container or placeholder for the different fragments that will be placed on it anytime a menu item is clicked in the bottom navigation bar. (We'll get to that shortly.)
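The layout itself isn't reproduced here, but a minimal sketch of what res/layout/activity_main.xml might contain is shown below. The ids match the ones used later in this tutorial (navigationView and container); the colour resource names are assumptions:

<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <FrameLayout
        android:id="@+id/container"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        app:layout_constraintTop_toTopOf="parent"
        app:layout_constraintBottom_toTopOf="@+id/navigationView" />

    <android.support.design.widget.BottomNavigationView
        android:id="@+id/navigationView"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        app:layout_constraintBottom_toBottomOf="parent"
        app:itemBackground="@color/colorPrimary"
        app:itemIconTint="@color/nav_item_colors"
        app:itemTextColor="@color/nav_item_colors"
        app:menu="@menu/navigation" />

</android.support.constraint.ConstraintLayout>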
Here we have created a BottomNavigationView widget with the id navigationView. The official documentation says that:
BottomNavigationView represents a standard bottom navigation bar for application. It is an implementation of material design bottom navigation.
Bottom navigation bars make it easy for users to explore and switch between top-level views in a single tap. It should be used when application has three to five top-level destinations.
The important attributes you should take note of that were added to our BottomNavigationView are:
app:itemBackground: this attribute sets the background of our menu items to the given resource. In our case, we just gave it a background colour.
app:itemIconTint: sets the tint which is applied to our menu items' icons.
app:itemTextColor: sets the colours to use for the different states (normal, selected, focused, etc.) of the menu item text.
To include the menu items for the bottom navigation bar, we can use the attribute app:menu with a value that points to a menu resource file.
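For example, a res/menu/navigation.xml file along the following lines would define the three items used in the rest of this tutorial (the icon drawable names are assumptions):

<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <item
        android:id="@+id/navigation_songs"
        android:icon="@drawable/ic_songs"
        android:title="Songs" />
    <item
        android:id="@+id/navigation_albums"
        android:icon="@drawable/ic_albums"
        android:title="Albums" />
    <item
        android:id="@+id/navigation_artists"
        android:icon="@drawable/ic_artists"
        android:title="Artists" />
</menu>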
Here we have defined a menu using the <menu> element, which serves as a container for menu items. An <item> element creates a MenuItem, which represents a single item in the menu. As you can see, each <item> has an id, an icon, and a title.
3. Initialization of Components
Next, we are going to initialize an instance of BottomNavigationView. Initialization is going to happen inside onCreate() in MainActivity.kt.
import android.os.Bundle
import android.support.design.widget.BottomNavigationView
import android.support.v7.app.ActionBar
import android.support.v7.app.AppCompatActivity

class MainActivity : AppCompatActivity() {

    lateinit var toolbar: ActionBar

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        toolbar = supportActionBar!!
        val bottomNavigation: BottomNavigationView = findViewById(R.id.navigationView)
    }
}
4. Testing the App
Now we can run the app!
As you can see, our bottom navigation bar is showing at the bottom of the app screen. Nothing will happen if you click on any of the navigation items there—we're going to handle that part in the next section.
5. Configuring Click Events
Now, let's see how to configure click events for each of the items in the bottom navigation bar. Remember that clicking on any item in there should take the user to a new destination in the app.
Here we called the method setOnNavigationItemSelectedListener. Here is what it does according to the official documentation:
Set a listener that will be notified when a bottom navigation item is selected.
We used the when expression to perform different actions based on the menu item that was clicked—the menu item ids serve as constants for the when expression. At the end of each when branch, we return true.
We then pass our mOnNavigationItemSelectedListener listener to setOnNavigationItemSelectedListener() as an argument.
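In code, registering the listener is a single call on the BottomNavigationView instance we looked up earlier (a one-line sketch using the bottomNavigation variable from MainActivity.kt):

bottomNavigation.setOnNavigationItemSelectedListener(mOnNavigationItemSelectedListener)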
Be aware that there is another similar method called setOnNavigationItemReselectedListener, which will be notified when the currently selected bottom navigation item is reselected.
Next, we are going to create the different pages (or Fragments) for each of the menu items in the bottom navigation bar, so that when a menu item is clicked or tapped, it displays a different Android Fragment or page.
6. Creating the Fragments (Pages)
We'll start with the SongsFragment.kt class, and you should follow a similar process for the remaining two fragment classes—AlbumsFragment.kt and ArtistsFragment.kt.
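The fragment classes themselves are simple; a minimal sketch of what SongsFragment.kt might look like is shown below. The layout name fragment_songs is an assumption—use whatever layout your page needs:

import android.os.Bundle
import android.support.v4.app.Fragment
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup

class SongsFragment : Fragment() {

    override fun onCreateView(inflater: LayoutInflater, container: ViewGroup?,
                              savedInstanceState: Bundle?): View? {
        // Inflate the page's layout (fragment_songs is an assumed layout file)
        return inflater.inflate(R.layout.fragment_songs, container, false)
    }

    companion object {
        fun newInstance(): SongsFragment = SongsFragment()
    }
}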
When any of the menu items is clicked, we open the corresponding Fragment and also change the action bar title.
private val mOnNavigationItemSelectedListener = BottomNavigationView.OnNavigationItemSelectedListener { item ->
    when (item.itemId) {
        R.id.navigation_songs -> {
            toolbar.title = "Songs"
            val songsFragment = SongsFragment.newInstance()
            openFragment(songsFragment)
            return@OnNavigationItemSelectedListener true
        }
        R.id.navigation_albums -> {
            toolbar.title = "Albums"
            val albumsFragment = AlbumsFragment.newInstance()
            openFragment(albumsFragment)
            return@OnNavigationItemSelectedListener true
        }
        R.id.navigation_artists -> {
            toolbar.title = "Artists"
            val artistsFragment = ArtistsFragment.newInstance()
            openFragment(artistsFragment)
            return@OnNavigationItemSelectedListener true
        }
    }
    false
}

private fun openFragment(fragment: Fragment) {
    val transaction = supportFragmentManager.beginTransaction()
    transaction.replace(R.id.container, fragment)
    transaction.addToBackStack(null)
    transaction.commit()
}
Here we're using a method called openFragment() that simply uses the FragmentTransaction to add our fragment to the UI.
7. Testing the App
Now run the project again to see how it all works!
Anytime you click on a menu item, it will take you to a new Fragment.
Note that when we have more than three menu items in the bottom navigation bar—i.e. in BottomNavigationView—the Android system automatically enables shift mode. In this mode, when any of the menu items is clicked, the other items to the right or left of the clicked item are shifted.
8. Bonus: Using Android Studio Templates
Now that you have learnt about the APIs involved to create a bottom navigation bar from scratch in Android, I'll show you a shortcut that will make it faster next time. You can simply use a template instead of coding a navigation bar from scratch.
Android Studio provides code templates that follow the Android design and development best practices. These existing code templates (available in Java and Kotlin) can help you quickly kick-start your project. One such template can be used to create a bottom navigation bar.
To use this handy feature for a new project, first fire up Android Studio.
Enter the application name and click the Next button. You can leave the defaults as they are in the Target Android Devices dialog.
Click the Next button again.
In the Add an Activity to Mobile dialog, select Bottom Navigation Activity. Click the Next button again after that.
In the last dialog, you can rename the Activity, or change its layout name or title if you want. Finally, click the Finish button to accept all configurations.
Android Studio has now helped us to create a project with a bottom navigation activity. Really cool!
You are strongly advised to explore the code generated.
In an existing Android Studio project, to use this template, simply go to File > New > Activity > Bottom Navigation Activity.
Note that the templates that come included with Android Studio are good for simple layouts and making basic apps, but if you want to really kick-start your app, you might consider some of the app templates available from Envato Market.
They’re a huge time saver for experienced developers, helping them to cut through the slog of creating an app from scratch and focus their talents instead on the unique and customised parts of creating a new app.
Conclusion
In this tutorial, you learned how to create a bottom navigation bar in Android using the BottomNavigationView API from scratch. We also explored how to easily and quickly use the Android Studio templates to create a bottom navigation activity.
Beyond enabling iOS developers to easily store data on the cloud and authenticate users through its robust SDKs, Firebase also provides a convenient storage solution for media. Firebase Storage allows developers to store and retrieve audio, image, and video files on the cloud. That is, Firebase Storage exposes a set of SDKs that give developers the ability to manage their user-generated content assets alongside its sibling product, the Firebase Realtime Database, which stores user text content.
However, Firebase Storage is more than just a storage container for rich media assets. It assists developers by offering offline synchronization, queuing and resuming image and video transfers as the user goes offline and comes back online. This works similarly to how the Firebase Realtime Database orchestrates synchronization of user data with the back-end.
This tutorial will expose you to the Firebase Storage SDKs, to help you manage your app’s media assets—such as image, audio and video files—storing them remotely on the cloud, and retrieving them throughout your app. In this tutorial, you will learn how to:
set up your app for Firebase Storage
create and work with storage references
upload media to Firebase Storage
download media from Firebase Storage
Assumed Knowledge
This tutorial assumes you have had some exposure to Firebase, and a background developing with Swift and Xcode. It is also important that you have gone through our Get Started With Firebase Authentication for iOS tutorial first as you will need to authenticate your users prior to accessing much of the Firebase Storage functionality, including asset paths.
What Is Firebase Storage?
As a developer, you can use the Firebase Storage SDKs to access and interact with your storage bucket in a serverless fashion, without the need to create and host your own servers. Firebase Storage makes use of local on-device caching to store assets when the user is offline and to serve them when the user gets back online, with the local data automatically synchronized.
Developers no longer have to deal with the complexities of synchronizing data and content through Apple's standard iOS networking libraries, or handle the many scenarios that can cause transfer interruptions.
In fact, the Firebase products recognize that real-world mobile users face the prospect of interrupted or low-signal situations. Being able to synchronize data on-device for later transfer makes for a much better user experience, whilst saving developers a lot of work.
Security is also paramount with Firebase Storage, as it is with the rest of the Firebase suite of products. This means developers can restrict access to storage items by authenticating users with Firebase Authentication, combined with a declarative security model that allows control of access to paths, files, and metadata.
Finally, apps hosted on Firebase Storage benefit from a Google infrastructure that scales as the user base grows. We will explore some of these concepts later in the tutorial, but to start with, let’s go through setting up your app to work with Firebase. Then we'll take a look at Storage Reference pointers.
Set Up the Project
If you have worked with Firebase before, a lot of this should be familiar to you. Otherwise, you will need to create an account in Firebase, and follow the instructions in the Set Up the Project section of the article Get Started With Firebase Authentication for iOS.
You can download the complete source code for this project by entering the following in terminal:
Once your environment is ready, we can move on to taking a look at storage references, starting with how to create a reference pointer.
Creating & Working With Storage References
Using Firebase Storage, you can interact with your own cloud bucket, which represents a filesystem of your stored images and videos. A storage reference is a pointer to a particular path, or to a file within a path, in that filesystem, and your app uses it to interact with the bucket by transferring data.
Having a pointer to a path or file allows you to upload, download, update, or delete it. To create a reference, you first get a Storage instance with Storage.storage(), as follows:
let store = Storage.storage()
let storeRef = store.reference()
You now have a reference to the root of your filesystem hierarchy, and you can set the structure for your bucket as you wish, for example by creating a folder structure.
To access files and paths in your bucket, call the child() method, as follows:
let userProfilesRef = storeRef.child("images/profiles")
...
let logoRef = storeRef.child("images/logo.png")
References are a shorthand for the complete Firebase path to your file via your bucket, instead of entering your entire Firebase bucket URL path. Besides the child() method, you can also navigate your hierarchy using the root() and parent() methods, and you can chain these methods, as you will see below:
let userProfilesRef = logoRef.parent()?.child("profiles")
As you can see, we would get the same results for userProfilesRef as we had in the previous block of code. The great thing about references is that they are extremely lightweight, so you can have as many references within your app instance as you wish, without affecting the performance of your app.
Now that you understand the fundamental aspects of working with Firebase Storage references, let’s move on to uploading and downloading files from your bucket.
Uploading Media to Firebase Storage
The simplest way to upload a file is to pass in a Data representation of its contents in memory:
let uploadUserProfileTask = userProfilesRef.child("\(userID).png").putData(data, metadata: nil) { (metadata, error) in
    guard let metadata = metadata else {
        print("Error occurred: \(String(describing: error))")
        return
    }
    // In Firebase 4.x, StorageMetadata exposes downloadURL(); newer SDK versions
    // expose downloadURL(completion:) on the storage reference instead
    print("download url for profile is \(String(describing: metadata.downloadURL()))")
}
You can manage your uploads in progress by controlling when to commence, pause, resume, and cancel them. You can also listen for the events that are triggered, which are:
pause
resume
cancel
Referencing the uploadUserProfileTask we used earlier, you can control your uploads using the following methods:
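// A sketch mirroring the download controls shown later in this tutorial

// Pause the upload
uploadUserProfileTask.pause()

// Resume the upload
uploadUserProfileTask.resume()

// Cancel the upload
uploadUserProfileTask.cancel()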
You can also monitor a transfer in progress by simply setting an observer to the task instance object:
let progressObserver = uploadUserProfileTask.observe(.progress) { snapshot in
    let percentComplete = 100.0 * Double(snapshot.progress!.completedUnitCount)
        / Double(snapshot.progress!.totalUnitCount)
    print(percentComplete)
}
Let’s see how you would approach downloading images or videos from the storage bucket.
Downloading Media From Firebase Storage
To be able to download and present your images, you start off as you did with uploading and declare a reference pointer to your designated path. Then commence the download using the getData(maxSize:completion:) method:
logoRef.getData(maxSize: 1 * 1024 * 1024) { data, error in
    if let error = error {
        print("Error \(error)")
    } else {
        let logoImage = UIImage(data: data!)
    }
}
If you make use of FirebaseUI, you can simply have FirebaseUI manage the downloading, caching, and displaying of images for you in an even simpler way:
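For example, FirebaseUI's SDWebImage integration lets a UIImageView load an image straight from a storage reference. The snippet below is a sketch based on that integration—imageView is an assumed image view in your UI, and the placeholder asset name is made up:

import FirebaseStorageUI

// Load the logo directly from the logoRef reference created earlier,
// showing a placeholder while the download is in progress
imageView.sd_setImage(with: logoRef, placeholderImage: UIImage(named: "placeholder.png"))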
For information on implementing FirebaseUI, refer to the FirebaseUI documentation.
Managing downloads works in a similar manner to managing and controlling uploads. Here's an example:
// Assumes localFile is a writable file URL on the device, e.g.:
let localFile = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("logo.jpg")

let downloadTask = storageRef.child("images/logo.jpg").write(toFile: localFile)

// Pause the download
downloadTask.pause()

// Resume the download
downloadTask.resume()

// Cancel the download
downloadTask.cancel()
You can also designate an observer as we did for uploads, to track the progress of the download transfer in real time:
let progressObserverDownload = downloadTask.observe(.progress) { snapshot in
    let percentComplete = 100.0 * Double(snapshot.progress!.completedUnitCount)
        / Double(snapshot.progress!.totalUnitCount)
    print(percentComplete)
}
Armed with an overview of how to work with references and how to upload and download assets from your bucket, you are now ready to take a look at how to implement Firebase Storage for our sample project: FirebaseDo.
The Sample FirebaseDo Project
You should have cloned the FirebaseDo app by now, so go ahead and build and run the project. You will see that all it does is authenticate users, using either phone or email:
Our goal is to incrementally improve the app's functionality so that, once our users authenticate successfully, they can upload a profile photo. Most of our work will be in the HomeViewController and its associated storyboard. Let's address the HomeViewController file first.
The HomeViewController
Before we jump into the methods of this controller, we'll need to add the UIImagePickerControllerDelegate protocol to our class so that we can work with its delegate methods. We will also need to add a picker instance so that our users can choose a photo from their library.
class HomeViewController: UIViewController, FUIAuthDelegate, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    @IBOutlet weak var myImageView: UIImageView!

    let picker = UIImagePickerController()

    ...

    fileprivate(set) var auth: Auth?
    fileprivate(set) var authUI: FUIAuth? // only set internally but get externally
    fileprivate(set) var authStateListenerHandle: AuthStateDidChangeListenerHandle?
Add the following towards the end of the viewDidLoad() method:
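The exact lines aren't reproduced here, but based on the surrounding code they would hook up the picker's delegate and trigger the initial download, along these lines (a sketch, not the verbatim source):

picker.delegate = self
refreshProfileImage()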
We are going to implement the refreshProfileImage() method, which is called to download the user's profile image and display it in our view controller. We first assert that the user is indeed authenticated, before creating a reference that retrieves the user's profile image from the images/user_id/profile_photo.jpg path within our bucket. Finally, we update our image view with the retrieved image.
func refreshProfileImage() {
    if let user = Auth.auth().currentUser {
        let store = Storage.storage()
        let storeRef = store.reference().child("images/\(user.uid)/profile_photo.jpg")
        storeRef.getData(maxSize: 1 * 1024 * 1024) { data, error in
            if let error = error {
                print("error: \(error.localizedDescription)")
            } else {
                let image = UIImage(data: data!)
                self.myImageView.image = image
            }
        }
    } else {
        print("You should be logged in")
        self.loginAction(sender: self)
        return
    }
}
Next, we create an @IBAction method for the photo library button which we will shortly connect to from our storyboard:
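A sketch of such an action is shown below. The method name libraryAction matches the reference to it later in this tutorial; the body is an assumption based on standard UIImagePickerController usage:

@IBAction func libraryAction(sender: Any) {
    // Present the photo library so the user can pick a profile photo
    picker.allowsEditing = false
    picker.sourceType = .photoLibrary
    present(picker, animated: true, completion: nil)
}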
Finally, we add two delegate methods for our UIImagePickerController, to handle when the user cancels the UIImagePicker, as well as handling the selected image:
func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
    dismiss(animated: true, completion: nil)
}

func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
    self.dismiss(animated: true, completion: nil)
    let profileImageFromPicker = info[UIImagePickerControllerOriginalImage] as! UIImage

    let metadata = StorageMetadata()
    metadata.contentType = "image/jpeg"
    let imageData: Data = UIImageJPEGRepresentation(profileImageFromPicker, 0.5)!

    let store = Storage.storage()
    let user = Auth.auth().currentUser

    if let user = user {
        let storeRef = store.reference().child("images/\(user.uid)/profile_photo.jpg")
        ASProgressHud.showHUDAddedTo(self.view, animated: true, type: .default)
        let _ = storeRef.putData(imageData, metadata: metadata) { (metadata, error) in
            ASProgressHud.hideHUDForView(self.view, animated: true)
            guard let _ = metadata else {
                print("error occurred: \(error.debugDescription)")
                return
            }
            self.myImageView.image = profileImageFromPicker
        }
    }
}
Once the user selects an image, we dismiss the picker but keep a reference to the selected image. Next, we create a StorageMetadata() instance so that we can tell Firebase we are going to upload a JPEG file.
As we did in the refreshProfileImage() method, we are going to assert that the user is authenticated, and then create a reference to the images path where we want to store our user’s profile. Using the putData() method, we then asynchronously upload our image to the designated bucket location, before setting our image view to the newly selected image.
Before we can build and run our app, we will need to add the appropriate controls to our storyboard.
Storyboard
Within our main storyboard, add an image view with a placeholder image that will represent the user’s current profile, and then drag to associate the image view with the one we have declared as an @IBOutlet in our HomeViewController class. Next, add a toolbar with a button that you will use as an @IBAction to call the libraryAction() method we created earlier in the HomeViewController.
Your Storyboard should now resemble the following:
Absent any errors, you can go ahead and build and run your app once again, and authenticate by either creating a new user or using an existing user's credentials.
You will then be presented with the HomeViewController, where you can select the + button to add an image from your device or simulator's photo library. Once you've chosen a photo, it will be uploaded to the Firebase bucket. You can confirm that it has been successfully uploaded by going to the Storage tab of your Firebase Console, as shown below:
If you stop and re-run the app in Xcode, you should also see the image you last uploaded reappear, further confirming we have successfully uploaded and downloaded using Firebase Storage.
Conclusion
This tutorial demonstrated how easy it is to add asynchronous asset storage and management to an existing Firebase app with just a few lines of code. This provides you with a convenient way to manage your app's assets, while letting you handle offline synchronization elegantly and conveniently.
Firebase Storage is an obvious choice for iOS developers who are already within the Firebase ecosystem. It provides developers with the security of a declarative security model backed by Firebase Authentication, as well as the capability provided by the Firebase Realtime Database.
While you're here, check out some of our other posts on iOS app development!
Bottom navigation bars make it easy to explore and switch between top-level views in a single tap.
Tapping on a bottom navigation icon takes you directly to the associated view or refreshes the currently active view.
According to the official material design guidelines for the bottom navigation bar, it should be used when your app has:
three to five top-level destinations
destinations requiring direct access
An example of a popular app that implements the bottom navigation bar is the Google+ Android app from Google, which uses it to navigate to different destinations of the app. You can see this yourself by downloading the Google+ app from Google Play store (if you don't already have it on your device). The following screenshot is from the Google+ app displaying a bottom navigation bar.
In this post, you'll learn how to display menu items inside a bottom navigation bar in Android. We'll use the BottomNavigationView API to perform the task. For an additional bonus, you'll also learn how to use the Android Studio templates feature to quickly bootstrap your project with a bottom navigation bar.
A sample project (in Kotlin) for this tutorial can be found on our GitHub repo so you can easily follow along.
Fire up Android Studio and create a new project (you can name it BottomNavigationDemo) with an empty activity called MainActivity. Make sure to also check the Include Kotlin support check box.
2. Adding the BottomNavigationView
To begin using BottomNavigationView in your project, make sure you import the design support and also the Android support artifact. Add these to your module's build.gradle file to import them.
Also, visit your res/layout/activlty_main.xml file to include the BottomNavigationView widget. This layout file also includes a ConstraintLayout and a FrameLayout. Note that the FrameLayout will serve as a container or placeholder for the different fragments that will be placed on it anytime a menu item is clicked in the bottom navigation bar. (We'll get to that shortly.)
Here we have created a BottomNavigationView widget with the id navigationView. The official documentation says that:
BottomNavigationView represents a standard bottom navigation bar for application. It is an implementation of material design bottom navigation.
Bottom navigation bars make it easy for users to explore and switch between top-level views in a single tap. It should be used when application has three to five top-level destinations.
The important attributes you should take note of that were added to our BottomNavigationView are:
app:itemBackground: this attribute sets the background of our menu items to the given resource. In our case, we just gave it a background colour.
app:itemIconTint: sets the tint which is applied to our menu items' icons.
app:itemTextColor: sets the colours to use for the different states (normal, selected, focused, etc.) of the menu item text.
To include the menu items for the bottom navigation bar, we can use the attribute app:menu with a value that points to a menu resource file.
Here we have defined a Menu using the <menu> which serves as a container for menu items. An <item> creates a MenuItem, which represents a single item in a menu. As you can see, each <item> has an id, an icon, and a title.
3. Initialization of Components
Next, we are going to initialize an instance of BottomNavigationView. Initialization is going to happen inside onCreate() in MainActivity.kt.
import android.os.Bundle
import android.support.design.widget.BottomNavigationView
import android.support.v7.app.AppCompatActivity
class MainActivity : AppCompatActivity() {
lateinit var toolbar: ActionBar
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
toolbar = supportActionBar!!
val bottomNavigation: BottomNavigationView = findViewById(R.id.navigationView)
}
}
4. Testing the App
Now we can run the app!
As you can see, our bottom navigation bar is showing at the bottom of the app screen. Nothing will happen if you click on any of the navigation items there—we're going to handle that part in the next section.
5. Configuring Click Events
Now, let's see how to configure click events for each of the items in the bottom navigation bar. Remember that clicking on any item in there should take the user to a new destination in the app.
Here we called the method setOnNavigationItemSelectedListener. Here is what it does according to the official documentation:
Set a listener that will be notified when a bottom navigation item is selected.
We used the when expression to perform different actions based on the menu item that was clicked—the menu item ids serve as constants for the when expression. At the end of each when branch, we return true.
We then pass our mOnNavigationItemSelectedListener listener to setOnNavigationItemSelectedListener() as an argument.
Be aware that there is another similar method called setOnNavigationItemReselectedListener, which will be notified when the currently selected bottom navigation item is reselected.
Next, we are going to create the different pages (or Fragments) for each of the menu items in the navigation drawer so that when a menu item is clicked or tapped, it displays a different Android Fragment or page.
6. Creating the Fragments (Pages)
We'll start with the SongsFragment.kt class, and you should follow a similar process for the remaining two fragment classes—AlbumsFragment.kt and ArtistsFragment.kt.
When any of the menu items is clicked, we open the corresponding Fragment and also change the action bar title.
private val mOnNavigationItemSelectedListener = BottomNavigationView.OnNavigationItemSelectedListener { item ->
    when (item.itemId) {
        R.id.navigation_songs -> {
            toolbar.title = "Songs"
            val songsFragment = SongsFragment.newInstance()
            openFragment(songsFragment)
            return@OnNavigationItemSelectedListener true
        }
        R.id.navigation_albums -> {
            toolbar.title = "Albums"
            val albumsFragment = AlbumsFragment.newInstance()
            openFragment(albumsFragment)
            return@OnNavigationItemSelectedListener true
        }
        R.id.navigation_artists -> {
            toolbar.title = "Artists"
            val artistsFragment = ArtistsFragment.newInstance()
            openFragment(artistsFragment)
            return@OnNavigationItemSelectedListener true
        }
    }
    false
}

private fun openFragment(fragment: Fragment) {
    val transaction = supportFragmentManager.beginTransaction()
    transaction.replace(R.id.container, fragment)
    transaction.addToBackStack(null)
    transaction.commit()
}
Here we're using a method called openFragment() that simply uses a FragmentTransaction to replace the contents of our container with the given fragment.
Now run the project again to see how it all works!
Any time the user taps a menu item, the app takes them to a new Fragment.
Note that when we have more than three menu items in the bottom navigation bar—i.e. in BottomNavigationView—the Android system automatically enables shift mode. In this mode, when any of the menu items is clicked, the other items to the right or left of the clicked item are shifted.
7. Bonus: Using Android Studio Templates
Now that you have learned about the APIs involved in creating a bottom navigation bar from scratch in Android, I'll show you a shortcut that will make it faster next time. You can simply use a template instead of coding a navigation bar from scratch.
Android Studio provides code templates that follow the Android design and development best practices. These existing code templates (available in Java and Kotlin) can help you quickly kick-start your project. One such template can be used to create a bottom navigation bar.
To use this handy feature for a new project, first fire up Android Studio.
Enter the application name and click the Next button. You can leave the defaults as they are in the Target Android Devices dialog.
Click the Next button again.
In the Add an Activity to Mobile dialog, select Bottom Navigation Activity. Click the Next button again after that.
In the last dialog, you can rename the Activity, or change its layout name or title if you want. Finally, click the Finish button to accept all configurations.
Android Studio has now helped us to create a project with a bottom navigation activity. Really cool!
I strongly advise you to explore the generated code.
In an existing Android Studio project, to use this template, simply go to File > New > Activity > Bottom Navigation Activity.
Note that the templates that come included with Android Studio are good for simple layouts and making basic apps, but if you want to really kick-start your app, you might consider some of the app templates available from Envato Market.
They’re a huge time saver for experienced developers, helping them to cut through the slog of creating an app from scratch and focus their talents instead on the unique and customised parts of creating a new app.
Conclusion
In this tutorial, you learned how to create a bottom navigation bar in Android using the BottomNavigationView API from scratch. We also explored how to easily and quickly use the Android Studio templates to create a bottom navigation activity.
Technology is rapidly changing, and today's latest device becomes completely outdated in a flash. In such a dynamic and fast-moving tech environment, it can be hard for developers to know where to focus. We all want to find the best avenues to channel our learning and development efforts.
Many technologists believe that the golden age of smartphones is nearing its end, and that a whole new batch of hi-tech wearable devices is about to replace them in the near future. What will these technologies and devices look like? Wearables range in size from watches through to smart glasses and smart rings, and every day they are becoming smaller while boosting their performance.
These devices have already started to redefine user interaction patterns, user behaviour, and sometimes even the user's lifestyle. In this article, you'll learn about the latest emerging wearable device platforms for which you could develop apps.
1. Smartwatches
Although smartwatches are the obvious next step, it took a while for them to challenge the dominant position of smartphones. That was mainly due to interaction problems associated with the small screen size and poor battery life.
Most smartwatches started out as a "companion" to a smartphone. However, things are changing really fast. Several standalone smartwatches that don't need to pair with a smartphone are available now. The latest innovations have enhanced and refined the user interaction and user experience to a great extent.
If you want to develop for a smartwatch platform, you might consider one of the following.
1.1 Android Wear
Android Wear is one of the leading smartwatch platforms. Its latest version, Android Wear 2.0, has eliminated many problems of the previous versions and comes with some really cool features. The smartwatches powered by this platform can now function as standalone devices, meaning that they don't have to rely on a smartphone anymore. The UI has become more refined, more readable, and easier to navigate than ever before. It also features a full QWERTY keyboard so that the user can type on the device itself. The coolest thing is that it can directly access the Google Play store, without relying on a smartphone for the connection.
So it's clear that Android Wear offers great opportunities for developers to explore. You could start developing either watch faces or other Android Wear apps, and you are free to experiment with a broad range of supported hardware, including Bluetooth, Wi-Fi, LTE, GPS, NFC, and the heart rate sensor. Android Wear 2.0 now even supports third-party input methods, so if you've been thinking of developing an innovative soft keyboard for the watch, this may be the time for it.
1.2 Apple Watch
Apple Watch's latest model, Series 3, comes in two variants. Only one has optional LTE cellular connectivity, but both are equipped with onboard GPS. While only the LTE variant permits a standalone mode, both are optimized to be used together with a smartphone. You could come up with some novel and innovative app ideas that make use of the built-in GPS, LTE connectivity, altimeter, and Siri, the voice assistant.
Apple has also recently released watchOS 4, the latest version of its wearable OS, which fixes a number of bugs and issues, especially those related to LTE connectivity. That means you can worry less about connectivity to the outside world and focus more on your app development.
1.3 Samsung Gear S Series
While Tizen isn't as popular as Android or iOS among smartphone users, it's really a big name in the smartwatch sector. The Samsung Gear smartwatch, powered by Tizen OS, has the second largest market share in this sector.
You should take into account the unique features of the watch when you're developing apps. These features include speech to text, GPS, in-app purchases, and a special UI element called Widget that provides easy access to frequently used tasks. The latest version is the Gear S3, and that too can be used as a standalone device. You just need to use Tizen Studio to make your app idea a reality.
2. Fitness & Activity Trackers
Some of the tech vendors have created wearables that cater to the specific needs of certain niches, rather than trying to build miniature smartphones. One such niche market consists of athletes, sportspeople, and outdoor adventure lovers.
Wearables catering to this sector don't try to replace their users' smartphones. Instead, they're more likely to replace their regular wristwatches. These devices provide more accurate feedback on the users' sports activities. Most of them have stripped-down operating systems and hardware, so that they can focus on their specialized job. That has enabled them to dramatically improve their battery life too.
2.1 Fitbit
Fitbit is an activity tracker that doubles as a watch. It pairs with a smartphone to provide comprehensive reports of the user's workout performance. Users can set daily goals, such as the number of calories to burn, and then view their progress towards them over a period of time. Developing apps for the Fitbit is a breeze if you are experienced in JavaScript, CSS, and SVG. Fitbit OS is a clever piece of software that makes this fitness tracker really exciting and easy to use.
You could use Fitbit Studio, the official IDE for the Fitbit OS, to develop apps and clock faces. If you want to distribute your apps, you could do that too, by uploading them to the App Gallery.
2.2 Garmin
Garmin has a series of wearables aimed at athletes, workout addicts, and outdoor adventure lovers. Almost all of their devices come ready with GPS, heart rate monitor, and dozens of useful sensors and features.
You can use Garmin's Connect IQ SDK and select from a number of APIs such as Health API, Connect API and various others to develop apps. The developer website is full of additional tools and resources such as GIS software, digital map datasets, and a lot more.
2.3 Samsung Gear Fit
While the Gear S is a full-featured smartwatch, the Gear Fit is more inclined towards the fitness tracker market. You can use the same tools that you used for the Gear S; just keep in mind this device's specialized role as a fitness tracker.
3. Smart Glasses
Smart glasses offer a unique experience that's completely different from all the hand-worn wearables. They don't isolate the user as much from the real world as VR headsets do, but rather mix with reality. They normally do this by adding a layer of information on top of the user's view of the real world.
These smart glasses can be used in a variety of situations ranging from general consumer apps to highly technical and industrial tasks. One great example is for equipment repairs. The technician could see the actual equipment through the smart glasses, and an AR app would provide more assistance by identifying all the parts the technician touches and displaying information about them in an overlay.
3.1 Epson Moverio
Epson was a pioneer in this sector, and its latest Moverio models include Moverio BT-300, BT-350, and BT-2000 Pro versions. Although they don't support cellular data connectivity, you can use the built-in Wi-Fi or Bluetooth to connect them to any supported device.
Epson's smart glasses use Android OS and are packed with a number of sensors such as GPS, geomagnetic sensor, accelerometer, gyroscope, and illumination sensor. Now you too can become an AR app publisher, by registering on their developer website and using the Moverio SDK plus the optimized tools to create apps.
3.2 Daqri
Unlike Epson, which is more inclined towards consumers, Daqri focuses on enterprise clients. Its smart glasses and smart helmet are useful in a number of industrial and medical applications. The platform can provide real-time data visualization, job instructions, and remote expert assistance. You can download its SDK as an extension for Unity, and immediately start coding.
3.3 Sony SmartEyeglass
Sony SmartEyeglass is primarily aimed at developers who want to experiment with the latest AR apps. It has an embedded camera, microphone, accelerometer, gyroscope, compass, and brightness sensors. A layer of monochrome green text appears on its binocular see-through lenses, providing the user with information.
These glasses need to be paired with a smartphone to function. Sony has also released an SDK, enabling developers to experiment with some cool app ideas.
3.4 Vuzix
Vuzix has a range of wearable products including smart glasses, smart sunglasses, and video headphones. They can cater to both consumers and professionals alike, and can cover a vast array of applications such as industrial, medical, retail, remote help desk, and a lot more. Be sure to register on the developer website and start developing after downloading the SDK.
4. VR Headsets
While VR headsets might make the wearer look ridiculous to others, they offer a truly immersive user experience that no other wearable can provide. Currently, the most promising applications are entertainment apps such as games, but there are many areas that could be explored.
One such area is training simulations. Employers might use VR headsets to give trainees simulated virtual tasks to accomplish, which reduces costs and provides effective feedback on performance. Immersive educational content is also sure to become a killer app.
Current VR headsets are packed with tons of sensors related to spatial, magnetic, optical and thermal data of the user's environment. They are capable of presenting the wearer with a real-world view, virtual-world view, or a combination of both. This makes them really powerful devices that can have a great impact on everyone.
Here are some of the most popular VR headset platforms.
4.1 HTC Vive
HTC's Vive VR headset comes with a complete set of accessories that help create realistic VR spaces called play areas. Users need to set up their headsets together with the accessories and define the play areas before using them. If you want to publish VR apps, just register as a developer on Vive's app store, Viveport, and start building new worlds using the Viveport SDK. The SDK supports several OS and game engine platforms, so you can choose the version that fits you best. You can publish your VR games on the popular SteamVR app store too.
4.2 Oculus Rift
Another leading platform in the VR space, Oculus offers a great VR experience and user interaction. Its SDK is available in several packages, including the Platform SDK and utilities for the Unity game engine. The popular Unreal game engine also offers built-in support for developing Oculus apps.
4.3 Samsung Gear VR
Samsung Gear VR is not a standalone VR headset, but just a device holder for compatible smartphones that provide a VR experience. Samsung has produced it in collaboration with Oculus, and it supports Samsung's flagship handsets. The headset device acts as the controller, providing the optics as well as head tracking mechanisms, etc. It connects to the smartphone via USB and must be calibrated before use. Although setting up the development environment can be somewhat time-consuming, it's worth it to become a developer for one of the latest tech platforms available today.
4.4 Google Daydream View
Daydream View is similar to the Gear VR, but this one's clad in fabric and weighs much less than Samsung's device. Google has recently started collaborating with Lenovo on building a standalone VR headset, but that's yet to arrive. In the meantime, Google offers four SDKs for developers, so they can choose Android, Unity, Unreal, or iOS as their main development platform.
4.5 Sony PlayStation VR
Sony's VR headset also competes head to head with other popular platforms such as Oculus, but becoming a developer is relatively difficult. You need to be physically located in certain select countries, have a static IP address to access developer support, and submit your employer's tax ID number. This means that only corporate developers are allowed.
4.6 Windows Mixed Reality
While most of the other VR headsets rely on external sensors for motion tracking, Windows Mixed Reality headsets have all the sensors built in. So there's no need to set up spaces such as play areas (as in the case of HTC Vive), but this means the tracking capabilities are relatively limited.
There are several vendors that manufacture Windows Mixed Reality headsets. Lenovo, HP, Samsung, Acer, and Dell are among them. There are tons of articles and other resources on the Microsoft HoloLens developer website to help you get up and running.
4.7 Google Cardboard
This is the most low-tech item in an ultra hi-tech list: Google's attempt to bring the VR experience to the masses at a very low cost. The Google Cardboard device is actually made of cardboard, and holds a smartphone and plastic lenses to provide a VR experience. Google has also published a complete manufacturer kit so that developers can start building everything from scratch. The only things they need to buy are a smartphone and the lenses.
5. Smart Rings
Smart rings are perhaps the next evolution of smartwatches. As wearables become smaller and smaller, interacting with them poses a real challenge for developers. However, with the help of some unconventional interaction methods, such as gesture control, these challenges can be solved. Below are two of the latest smart ring platforms.
5.1 Talon
Talon rings can connect to a range of devices from smartphones to tablets and smart TVs. Not only that, they can also be used as remote controls to switch on or off smart lights. A whole new world opens up when you think of the apps that can be created. You can control other devices or enhance the user experience of other apps. So just register as a Talon developer and request SDK access. You'll be creating amazing, futuristic apps in no time.
5.2 NFC Ring
The NFC Ring has a broad range of applications such as access control, data transfer, and payments. Really creative developers are free to come up with the coolest ideas and convert them into apps using the SDKs and other tools.
Conclusion
In this article, we took a brief look at the latest and emerging wearable development platforms that could replace smartphones in the future. The technology is changing so fast that it's impossible to tell which of these will actually dominate. So get out there and start experimenting!
While you're here, check out some of our other posts on smartwatch and wearable app development.
We also have complete courses that will show you how to create a wearable app from start to finish for the popular Android Wear or Apple watchOS platforms.
Processing is one of the most powerful libraries available today for creating visual algorithmic artworks, both 2D and 3D. It is open source, based on Java, and comes with a large variety of functions geared to making drawing and painting with code both fun and easy.
By using Processing's core library in your Android apps, you can create high-performance graphics and animations without having to deal with Android's OpenGL or Canvas APIs. Usually, you won't even have to bother with low-level tasks such as managing threads, creating render loops, or maintaining frame rates.
In this tutorial, I'll show you how to add Processing to an Android app and introduce you to some of its most useful features.
1. Project Setup
Processing comes with its own integrated development environment, which can be used to create Android apps. However, if you are an Android app developer already, I'm sure you'd prefer to use Android Studio instead. So go ahead and download the latest version of Processing's Android Mode.
Inside the ZIP file you downloaded, you'll find a file named processing-core.zip. Extract it and rename it to processing-core.jar using the command line or your operating system's file explorer.
Lastly, add the JAR file as one of the dependencies of your Android Studio project by placing it inside the app module's libs folder.
You now have everything you need to start using Processing.
2. Creating a Canvas
Almost all of Processing's core functionality is available through the PApplet class, which essentially serves as a canvas you can draw on. By extending it, you get easy access to all the methods it has to offer.
val myCanvas = object : PApplet() {
    // More code here
}
To configure the canvas, you must override its settings() method. Inside the method, you can specify two important configuration details: the desired dimensions of the canvas and whether it should use the 2D or the 3D rendering engine. For now, let's make the canvas as large as the device's screen and use the default 2D rendering engine. To do so, you can call the fullScreen() shortcut method.
override fun settings() {
    fullScreen()
}
The settings() method is a special method that's needed only when you are not using Processing's own IDE. I suggest you don't add any more code to it.
If you want to initialize any variables or change any drawing-related parameters—such as the background color of the canvas or the number of frames it should display per second—you should use the setup() method instead. For example, the following code shows you how to use the background() method to change the background color of the canvas to red:
override fun setup() {
    background(Color.parseColor("#FF8A80")) // Material Red A100
}
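Similarly, to control the number of frames displayed per second, you could call the frameRate() method in the same place. This is just a sketch—30 frames per second is an arbitrary choice:
override fun setup() {
    background(Color.parseColor("#FF8A80")) // Material Red A100
    frameRate(30f) // Limit the drawing loop to 30 frames per second
}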
3. Displaying the Canvas
Because the canvas is still not a part of any activity, you won't be able to see it when you run your app. To display the canvas, you must first create a container for it inside your activity's layout XML file. A LinearLayout widget or a FrameLayout widget can be the container.
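For example, a bare-bones container might look like this sketch—canvas_container is the id that the Kotlin code below refers to:
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/canvas_container"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />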
A PApplet instance cannot be directly added to the container you created. You must place it inside a PFragment instance first and then call the setView() method of the PFragment instance to associate it with the container. The following code shows you how to do so:
// Place canvas inside fragment
val myFragment = PFragment(myCanvas)
// Display fragment
myFragment.setView(canvas_container, this)
At this point, if you run the app, you should be able to see a blank canvas covering the entire screen of your device.
4. Drawing Simple Shapes
Now that you are able to see the canvas, let's start drawing. To draw inside the canvas, you must override the draw() method of the PApplet subclass you created earlier.
override fun draw() {
    // More code here
}
It might not seem obvious immediately, but Processing, by default, tries to call the draw() method as often as 60 times every second, as long as the canvas is being displayed. That means that you can easily create both still graphics and animations with it.
Processing has a variety of intuitively named methods that allow you to draw geometric primitives such as points, lines, ellipses, and rectangles. For instance, the rect() method draws a rectangle, and the ellipse() method draws an ellipse. Both the rect() and ellipse() methods expect similar arguments: the X and Y coordinates of the shape, its width, and its height.
The following code shows you how to draw a rectangle and an ellipse:
rect(100f, 100f, 500f, 300f) // Top-left corner is at (100,100)
ellipse(350f, 650f, 500f, 400f) // Center is at (350,650)
Many of the methods are overloaded too, allowing you to slightly modify the basic shapes. For example, by passing a fifth parameter to the rect() method, a corner radius, you can draw a rounded rectangle.
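For instance—the 48f corner radius here is just an example value:
rect(100f, 100f, 500f, 300f, 48f) // Same rectangle as before, but with rounded corners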
If you run your app now, you should see something like this:
If you want to change the border color of the shapes, you can call the stroke() method and pass the desired color as an argument to it. Similarly, if you want to fill the shapes with a specific color, you can call the fill() method. Both the methods should be called before you actually draw the shape.
The following code draws a blue triangle with a green outline:
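What follows is a minimal sketch using Processing's triangle() method; the vertex coordinates are arbitrary example values.
stroke(Color.GREEN) // Green outline
fill(Color.BLUE) // Blue fill
triangle(150f, 900f, 550f, 900f, 350f, 1150f)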
If you run your app now, you'll be able to see the blue triangle, but you'll also notice that every other shape has also turned blue.
If the reason isn't obvious to you already, remember that the draw() method is called repeatedly. That means that any configuration parameter you change during a draw cycle will have an effect on subsequent draw cycles. So in order to make sure that all your shapes are drawn with the right colors, it is a good idea to always explicitly specify the color of every shape you draw, right before you draw it.
For instance, by adding the following code at the beginning of the draw() method, you can make the other shapes white again.
// Set the fill and stroke to white and black
// before drawing the rectangles and ellipses
stroke(Color.BLACK)
fill(Color.WHITE)
At this point, the canvas will look like this:
5. Handling Touch Events
With Processing, handling touch events is extremely easy. You don't need any event handlers whatsoever. All you need to do is check if a boolean variable named mousePressed is true to know when the user is touching the screen. Once you've confirmed that the user is touching the screen, you can use the mouseX and mouseY variables to determine the X and Y coordinates of the touch.
For example, the following code draws a new rectangle wherever the user touches the canvas.
// Check if the user is touching the canvas
if (mousePressed) {
    // Specify fill and stroke colors
    stroke(Color.RED)
    fill(Color.YELLOW)

    // Draw a rectangle at the touch point
    rect(mouseX.toFloat(), mouseY.toFloat(), 100f, 100f)
}
If you run your app now and drag your finger across the screen, you should see a lot of yellow rectangles being drawn.
Before we move on, here's a quick tip: if at any point you wish to clear the canvas, you can simply call the background() method again.
background(Color.parseColor("#FF8A80")) // Material Red A100
6. Working With Pixels
There's only so far you can get with simple primitives. If you are interested in creating intricate and complex artwork, you'll probably need access to the individual pixels of the canvas.
By calling the loadPixels() method, you can load the colors of all the pixels of the canvas into an array named pixels. By modifying the contents of the array, you can very efficiently modify the contents of the canvas. Lastly, once you've finished modifying the array, you should remember to call the updatePixels() method to render the new set of pixels.
Note that the pixels array is a one-dimensional, integer array whose size is equal to the product of the width and height of the canvas. Because the canvas is two-dimensional, converting the X and Y coordinates of a pixel into a valid index of the array involves use of the following formula:
// index = xCoordinate + yCoordinate * widthOfCanvas
The following example code, which sets the color of each pixel of the canvas based on its X and Y coordinates, should help you better understand how to use the pixels array:
override fun draw() {
    loadPixels() // Load the pixels array

    // Loop through all valid coordinates
    for (y in 0 until height) {
        for (x in 0 until width) {
            // Calculate the index of the pixel at (x, y)
            val index = x + y * width

            // Update the pixel at that index with a new color
            pixels[index] = Color.rgb(x % 255, y % 255, (x * y) % 255)
        }
    }

    // Render the pixels with their new colors
    updatePixels()
}
The Color.rgb() method you see above converts individual red, green, and blue values to an integer that represents a single color value that the Processing framework understands. Feel free to modify the arguments you pass to it, but do make sure that they are always within the range 0 to 255.
If you choose to run the code without any modifications, you should see a pattern that looks like this:
Conclusion
You now know how to create 2D graphics using the Processing language. With the skills you learned today, you can not only make your Android apps more appealing, but also create full-fledged games from scratch. You are limited only by your creativity!
To learn more about Processing, I suggest you spend some time browsing through the official reference pages.
And while you're here, check out some of our other posts on Android app development!
Along with many other everyday tools that have quickly been replaced by modern technology, it looks as if the common tape measure may be the next to go. In this two-part tutorial series, we're learning how to use augmented reality and the camera on your iOS device to create an app which will report the distance between two points.
In the first post, we created the app project and coded its main interface elements. In this post, we'll finish it off by measuring between two points in the AR scene. If you haven't yet, follow along with the first post to get your ARKit project set up.
Here's one of the biggest parts of this tutorial: handling when the user taps on their world to get a sphere to appear exactly where they tapped. Later, we'll calculate the distance between these spheres to finally show the user their distance.
Tap Gesture Recognizer
The first step in checking for taps is to create a tap gesture recognizer when the app launches. To do this, create a tap handler as follows:
// Creates a tap handler and then sets it to a constant
let tapRecognizer = UITapGestureRecognizer(target: self, action: #selector(handleTap))
The first line creates an instance of the UITapGestureRecognizer() class and passes in two parameters at the initialization: the target and the action. The target is the recipient of the notifications which this recognizer sends, and we want our ViewController class to be the target. The action is simply a method which should be called each time there is a tap.
To set the number of taps, add this:
// Sets the amount of taps needed to trigger the handler
tapRecognizer.numberOfTapsRequired = 1
Next, the instance of the class we created earlier needs to know how many taps are actually needed to activate the recognizer. In our case, we just need one tap, but in other apps, you may need to have more (such as a double tap) for some cases.
Add the handler to the scene view like this:
// Adds the handler to the scene view
sceneView.addGestureRecognizer(tapRecognizer)
Lastly, this single line of code adds the gesture recognizer to the sceneView, which is where we'll be doing everything: it shows the camera preview, and it's what the user will tap directly to make a sphere appear on the screen. So it makes sense to add the recognizer to the view the user will interact with.
Handle Tap Method
When we created the UITapGestureRecognizer(), you may remember that we set a handleTap method to the action. Now, we're ready to declare that method. To do this, simply add the following to your app:
@objc func handleTap(sender: UITapGestureRecognizer) {
    // Your code goes here
}
Though the function declaration may be pretty self-explanatory, you may wonder why there is an @objc tag in front of it. As of the current version of Swift, to expose methods to Objective-C, you need this tag. All you need to know is that #selector needs the referred method to be available to Objective-C. Lastly, the method parameter will let us get the exact location which was tapped on the screen.
Location Detection
The next step in getting our spheres to appear where the user tapped is to detect the exact position which they tapped. Now, this isn't as simple as getting the location and placing a sphere, but I am sure that you'll master it in no time.
Start by adding the following three lines of code to your handleTap() method:
// Gets the location of the tap and assigns it to a constant
let location = sender.location(in: sceneView)
// Searches for real world objects such as surfaces and filters out flat surfaces
let hitTest = sceneView.hitTest(location, types: [ARHitTestResult.ResultType.featurePoint])
// Assigns the most accurate result to a constant if it is non-nil
guard let result = hitTest.last else { return }
If you remember the parameter we took in the handleTap() method, you may recall that it was named sender, and it was of type UITapGestureRecognizer. Well, this first line of code simply takes the location of the tap on the screen (relative to the scene view), and sets it to a constant named location.
Next, we're doing something called a hit test on the scene view itself. In simple terms, this checks the scene for real objects, such as tables, surfaces, walls, and floors. It gives us a sense of depth, so we can get pretty accurate measurements between two points. In addition, we're specifying the types of objects to detect; here, we're telling it to look for featurePoints—points that ARKit automatically identifies on real-world surfaces—which makes sense for a measuring app.
Lastly, the line of code takes the most accurate result, which in the case of hitTest is the last result, and checks that it isn't nil. If it is nil, the guard statement returns and the rest of the method is skipped; if there is indeed a result, it is assigned to a constant called result.
Matrices
If you think back to your high-school algebra class, you may remember matrices, which might not have seemed as important back then as they are right now. They're commonly used in computer graphics related tasks, and we'll be getting a glimpse of them in this app.
Add the following lines to your handleTap() method, and we'll go over them in detail:
// Converts the matrix_float4x4 to an SCNMatrix4 to be used with SceneKit
let transform = SCNMatrix4.init(result.worldTransform)
// Creates an SCNVector3 with certain indexes in the matrix
let vector = SCNVector3Make(transform.m41, transform.m42, transform.m43)
// Makes a new sphere with the created method
let sphere = newSphere(at: vector)
Before getting into the first line of code, it's important to understand that the hit test we did earlier returns a value of type matrix_float4x4, which is essentially a four-by-four matrix of float values. Since we're working with SceneKit, though, we'll need to convert it into something that SceneKit can understand—in this case, an SCNMatrix4.
Then, we'll use this matrix to create an SCNVector3, which, as its name suggests, is a vector with three components. As you may have guessed, those components are x, y, and z, giving us a position in space. transform.m41, transform.m42, and transform.m43 hold the coordinate values for the vector's three components.
Lastly, let's use the newSphere() method that we created earlier, along with the location information we parsed from the touch event, to make a sphere and assign it to a constant called sphere.
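If you don't have the newSphere(at:) method from the first post in front of you, here's a minimal sketch of what such a helper might look like. The radius and material are assumptions for illustration, not necessarily the exact values from part one:
import SceneKit
import UIKit

// A minimal sketch of a newSphere(at:) helper; the radius and color are assumptions
func newSphere(at position: SCNVector3) -> SCNNode {
    // Create a small sphere geometry (SceneKit units are meters)
    let geometry = SCNSphere(radius: 0.01)
    // Give the sphere a simple, flat-colored material
    let material = SCNMaterial()
    material.diffuse.contents = UIColor.orange
    geometry.materials = [material]
    // Wrap the geometry in a node and place it at the tapped position
    let node = SCNNode(geometry: geometry)
    node.position = position
    return node
}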
Solving the Double-Tap Bug
Now, you may have realized a slight flaw in our code; if the user keeps tapping, a new sphere would keep getting created. We don't want this because it makes it hard to determine which spheres need to be measured. Also, it's difficult for the user to keep track of all the spheres!
Solving With Arrays
The first step to solve this is to create an array at the top of the class.
var spheres: [SCNNode] = []
This is an array of SCNNodes because that's the type that we returned from our newSphere() method that we created towards the beginning of this tutorial. Later on, we'll put the spheres in this array and check how many there are. Based on that, we'll be able to manipulate their numbers by removing and adding them.
Optional Binding
Next, we'll use a series of if-else statements and for loops to figure out if there are any spheres in the array or not. For starters, add the following optional binding to your app:
if let first = spheres.first {
// Your code goes here
} else {
// Your code goes here
}
First, we're checking whether there are any items in the spheres array; if not, we execute the code in the else clause.
Auditing the Spheres
After that, add the following to the first part (the if branch) of your if-else statement:
// Adds a second sphere to the array
spheres.append(sphere)
print(sphere.distance(to: first))
// If more than two are present...
if spheres.count > 2 {
// Iterate through spheres array
for sphere in spheres {
// Remove all spheres
sphere.removeFromParentNode()
}
// Keep only the newest sphere in the array
spheres = [spheres[2]]
}
Since we're already in a tap event, we know that we're creating another sphere. So if there is already one sphere, we need to get the distance and display it to the user. We can call the distance(to:) method on the sphere because, later, we'll declare it in an extension of SCNNode.
Next, we need to know if there are already more than the maximum of two spheres. To check, we use the count property of our spheres array in an if statement. If there are, we iterate through all of the spheres in the array and remove them from the scene. (Don't worry, we'll add the newest one back later.)
Finally, since we're inside the if statement which tells us that there are more than two spheres, we keep only the third sphere (the one that was just created) and discard the rest from the array, so the newest sphere is ready to serve as the first point of the next measurement.
Adding the Spheres
Finally, in the else clause, we know that the spheres array is empty, so all we need to do is add the sphere that we created earlier in the method. Inside your else clause, add this:
// Add the sphere
spheres.append(sphere)
Yay! We just added the sphere to our spheres array, and our array is ready for the next tap. We've now prepared our array with the spheres that should be on the screen, so now, let's add these to the scene.
In order to iterate through and add the spheres, add this code:
// Iterate through spheres array
for sphere in spheres {
// Add all spheres in the array
self.sceneView.scene.rootNode.addChildNode(sphere)
}
This is just a simple for loop in which we add each sphere (an SCNNode) as a child of the scene's root node. In SceneKit, this is the preferred way to add content to a scene.
Full Method
Here's what the final handleTap() method should look like:
@objc func handleTap(sender: UITapGestureRecognizer) {
let location = sender.location(in: sceneView)
let hitTest = sceneView.hitTest(location, types: [ARHitTestResult.ResultType.featurePoint])
guard let result = hitTest.last else { return }
let transform = SCNMatrix4.init(result.worldTransform)
let vector = SCNVector3Make(transform.m41, transform.m42, transform.m43)
let sphere = newSphere(at: vector)
if let first = spheres.first {
spheres.append(sphere)
print(sphere.distance(to: first))
if spheres.count > 2 {
for sphere in spheres {
sphere.removeFromParentNode()
}
spheres = [spheres[2]]
}
} else {
spheres.append(sphere)
}
for sphere in spheres {
self.sceneView.scene.rootNode.addChildNode(sphere)
}
}
Calculating Distances
Now, if you'll remember, we called a distance(to:) method on our SCNNode, the sphere, and I'm sure that Xcode is yelling at you for using an undeclared method. Let's fix that now by creating an extension of the SCNNode class.
To create an extension, just do the following outside of your ViewController class:
extension SCNNode {
// Your code goes here
}
This simply lets you alter the class (it's as if you were editing the actual class). Then, we'll add a method which will compute the distance between two nodes.
Here's the function declaration to do that:
func distance(to destination: SCNNode) -> CGFloat {
// Your code goes here
}
As you can see, it takes another SCNNode as a parameter and returns a CGFloat as the result. For the actual calculation, add this to your distance(to:) function:
let dx = destination.position.x - position.x
let dy = destination.position.y - position.y
let dz = destination.position.z - position.z
let inches: Float = 39.3701
let meters = sqrt(dx*dx + dy*dy + dz*dz)
return CGFloat(meters * inches)
The first three lines of code subtract the x, y, and z positions of the current SCNNode from the corresponding coordinates of the node passed in as a parameter. We'll plug these differences into the distance formula in a moment. Also, because I want the result in inches, I've created a constant for the conversion rate between meters and inches for easy conversion later on.
Now, to get the distance between the two nodes, think back to your middle-school math class: you may remember the distance formula for the Cartesian plane. Here, we're applying it to points in three-dimensional space.
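In symbols, that's the familiar distance formula extended to a third coordinate:

$d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2} = \sqrt{dx^2 + dy^2 + dz^2}$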
Finally, we return the value multiplied by the inches conversion ratio to get the appropriate unit of measure. If you live outside the United States, you can leave it in meters or convert it to centimeters if you wish.
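For reference, here's the complete extension, assembled from the snippets above:
import SceneKit

extension SCNNode {
    // Returns the straight-line distance to another node, converted from meters to inches
    func distance(to destination: SCNNode) -> CGFloat {
        // Component-wise differences between the two node positions
        let dx = destination.position.x - position.x
        let dy = destination.position.y - position.y
        let dz = destination.position.z - position.z
        // Three-dimensional distance formula (SceneKit positions are in meters)
        let meters = sqrt(dx*dx + dy*dy + dz*dz)
        // Meters-to-inches conversion rate
        let inches: Float = 39.3701
        return CGFloat(meters * inches)
    }
}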
Conclusion
Well, that's a wrap! Here's what your final project should look like:
As you can see, the measurements aren't perfect, but it thinks a 15-inch computer is around 14.998 inches, so it's not bad!
You now know how to measure distances using Apple's new library, ARKit. This app can be used for many things, and I challenge you to think of different ways that this can be used in the real world, and be sure to leave your thoughts in the comments below.
Also, be sure to check out the GitHub repo for this app. And while you're still here, check out our other iOS development tutorials here on Envato Tuts+!
Since Android 1.5, application widgets have enabled users to get information, control apps, and perform crucial tasks, all from the comfort of their homescreens.
In this two-part series, I’ll be showing you how to provide a better user experience by adding an application widget to your Android projects.
By the end of the series, you’ll have created a widget that:
Displays multiple sets of data.
Performs a unique action when the user interacts with a specific View within that widget’s layout.
Updates automatically whenever a set period of time has elapsed.
Updates with new data in response to user interaction.
In this first post, we’ll be using Android Studio’s built-in tools to quickly and easily generate all the files required to deliver any Android application widget. We’ll then expand on this foundation, to create a widget that retrieves and displays data, and responds to onClick events.
What Are Application Widgets?
An application widget is a lightweight, miniature app that typically falls into one of the following categories:
Information widget. A non-scrollable widget that displays important information, such as a weather or clock widget.
Collection widgets. A scrollable widget that displays a series of related elements, such as a gallery of photos or articles from the same publication. Collection widgets are typically backed by a data source, such as an Array or database. A collection widget must include either a ListView, GridView, StackView, or an AdapterViewFlipper.
Control widgets. A widget that acts as a remote control for your application, allowing the user to trigger frequently used functions without necessarily having to launch your application. Applications that play music often provide a widget that lets the user play, pause, and skip tracks directly from their homescreen.
Hybrid widgets. Why restrict yourself to one category, when you can cherry-pick elements from multiple categories? Just be aware that mixing and matching can lead to a confusing user experience, so for the best results you should design your widget with a single category in mind, and then add elements from other categories, as required. For example, if you wanted to create a widget that displays today’s weather forecast, but also allows users to view the forecast for different days and locations, then you should create an information widget, and then add the necessary control elements afterwards.
In addition to the above functionality, most widgets respond to onClick events by launching their associated application, similar to an application shortcut, but they can also provide direct access to specific content within that application.
Application widgets must be placed inside an App Widget Host, most commonly the stock Android homescreen, although there are some third-party App Widget Hosts, such as the popular Nova Launcher and Apex Launcher.
Throughout this series I’ll be talking about widgets as something you place on the homescreen, but if you have a vague recollection of being able to place widgets on the lockscreen, then this wasn’t just some kind of wonderful dream! Between API levels 17 and 20, it was possible to place widgets on the homescreen or the lockscreen.
Since lockscreen widgets were deprecated in API level 21, in this series we’ll be creating a widget for the homescreen only.
Why Should I Create an Application Widget?
There are several reasons why you should consider adding an application widget to your latest Android project.
Easy Access to Important Information and Features
Widgets allow the user to view your app’s most important information, directly from their homescreen. For example, if you’ve developed a calendar app then you might create a widget that displays details about the user’s next appointment. This is far more convenient than forcing the user to launch your app and potentially navigate multiple screens, just to retrieve the same information.
If you develop a control widget (or a hybrid widget with control elements) then the user can also complete tasks directly from their homescreen. Continuing with our calendar example, your widget might allow the user to create, edit and cancel appointments, potentially without even having to launch your app. This has the potential to remove multiple navigation steps from some of your app’s most important tasks, which can only have a positive impact on the user experience!
Direct Access to All of Your App’s Most Important Screens
Tapping a widget typically takes the user to the top level of the associated application, similar to an application shortcut. However, unlike app shortcuts, widgets can link to specific areas within the associated application. For example, tapping a widget’s New email received notification might launch the application with the new message already selected, while tapping Create new email might take them directly to your app’s ComposeEmail Activity.
By embedding multiple links in your widget’s layout, you can provide convenient, one-tap access to all of your app’s most important Activities.
Create a Loyal, Engaged User Base
As the whole Pokemon Go explosion and subsequent drop-off proved, getting a ton of people to download your app doesn’t automatically guarantee a loyal user base who will still be using your app days, weeks, or even months down the line.
Mobile users are a pretty fickle bunch, and with the memory available on your typical Android smartphone or tablet increasing all the time, it’s easy to lose track of the apps you’ve installed on your device. Chances are, if you pick up your Android smartphone or tablet now and swipe through the app drawer then you’ll discover at least one application that you’ve completely forgotten about.
By creating a widget that showcases all of your app’s most valuable information and features, you ensure that each time the user glances at their homescreen they’re reminded not only that your app exists, but also that it has some great content.
Adding an App Widget to Your Project
Even the most basic widget requires multiple classes and resources, but when you create a widget using Android Studio’s built-in tools, all of these files are generated for you. Since there’s no point in making Android development any harder than it needs to be, we’ll be using these tools to get a head-start on building our widget.
An application widget must always be tied to an underlying app, so create a new Android project with the settings of your choice.
Once Android Studio has built your project, select File > New > Widget > AppWidget from the Android Studio toolbar. This launches a Configure Component menu where you can define some of your widget’s initial settings.
Most of these options are pretty self-explanatory, but there are a few that are worth exploring in more detail.
Resizable (API 12+)
If a widget is resizable, then the user can increase or decrease the number of “cells” it occupies on their homescreen, by long-pressing the widget and then dragging the blue handles that appear around its outline.
Wherever possible, you should give your widget the ability to resize horizontally and vertically, as this will help your widget adapt to a range of screen configurations and homescreen setups. If a user has a seriously cluttered homescreen, then it may be impossible for your widget to even fit onto that homescreen, unless your widget is resizable.
If you do want to create a non-resizable widget, then open the Resizable dropdown menu and select either Only horizontally, Only vertically, or Not resizable.
Minimum Width and Height
The minimum width and height specifies the number of cells your widget will initially occupy when it’s placed on the homescreen.
For resizable widgets, this is the smallest the user can size your widget, so you can use these values to prevent users from shrinking your widget to the point where it becomes unusable.
If your widget isn’t resizable, then the minimum width and height values are your widget’s permanent width and height.
To increase a widget’s chances of fitting comfortably across a range of homescreens, it’s recommended that you never use anything larger than 4 by 4 for the minimum width and height values.
While the exact width and height of a homescreen “cell” vary between devices, you can get a rough estimate of the space your widget will occupy, in density-independent pixels (dp), using the following formula:
(70 × number of cells) - 30
For example, if your widget is 2 x 3 cells:
70 x 2 - 30 = 110
70 x 3 - 30 = 180
This widget will occupy around 110 x 180 dp on the user’s homescreen. If these values don’t align with the dimensions of a particular device’s cells, then Android will automatically round your widget to the nearest cell size.
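If you’d rather not do the arithmetic by hand, the estimate is trivial to encode. This is just a throwaway helper for experimenting with sizes, not part of the generated widget code:
// Rough size estimate, in dp, for a widget spanning the given number of homescreen cells
static int cellsToDp(int numberOfCells) {
    return 70 * numberOfCells - 30;
}
Calling cellsToDp(2) and cellsToDp(3) reproduces the 110 and 180 values from the example above.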
Review all the options in this menu and make any desired changes (I’m sticking with the defaults) and then click Finish.
Android Studio will now generate all the files and resources required to deliver a basic application widget. This widget isn’t exactly exciting (it’s basically just a blue block with the word Example written across it) but it is a functional widget that you can test on your device.
To test the widget:
Install your project on a physical Android device or AVD (Android Virtual Device).
Launch Android’s Widget Picker by pressing any empty space on the homescreen, then tapping the word Widget that appears towards the bottom of the screen.
Swipe through the Widget Picker until you spot the blue Example widget.
Press down on this widget to drop it onto your homescreen.
Enter resize mode by pressing the widget until a set of blue handles appear, and then drag these handles to increase or decrease the number of cells that this widget occupies.
Exploring the Application Widget Files
This widget might not do all that much, but it includes all the classes and resources that we’ll be working on throughout the rest of this series, so let’s take a look at these files, and the role they play in delivering an application widget.
NewAppWidget.java
The widget provider is a convenience class containing the methods used to programmatically interface with a widget via broadcast events. Under the hood, a widget is essentially just a BroadcastReceiver that can respond to various actions, such as the user placing a new widget instance on their homescreen.
Most notably, the app widget provider is where you’ll define your widget’s lifecycle methods, which either get called for every instance of the widget, or for specific instances only.
Although we tend to think of a widget as a single entity that the user places on their homescreen once, there’s nothing to prevent them from creating multiple instances of the same widget. Maybe your widget is customisable, to the point where different instances can have significantly different functionality, or maybe the user just loves your widget so much that they want to plaster it all over their homescreen!
Let’s take a look at the different lifecycle methods that you can implement in the widget provider class:
The onReceive Event
Android calls the onReceive() method on the registered BroadcastReceiver whenever the specified event occurs.
You typically won’t need to implement this method manually, as the AppWidgetProvider class automatically filters all widget broadcasts and delegates operations to the appropriate methods.
The onEnabled Event
The onEnabled() lifecycle method is called in response to ACTION_APPWIDGET_ENABLED, which is broadcast when the user adds the first instance of your widget to their homescreen. If the user creates two instances of your widget, then onEnabled() is called for the first instance, but not for the second.
This lifecycle method is where you perform any setup that only needs to occur once for all widget instances, such as creating a database or setting up a service.
Note that if the user deletes all instances of your widget from their device, and then creates a new instance, then this is classed as the first instance, and consequently the onEnabled() method will be called once again.
The onAppWidgetOptionsChanged Event
This lifecycle method is called in response to ACTION_APPWIDGET_OPTIONS_CHANGED, which is broadcast when a widget instance is created, and every time that widget is resized. You can use this method to reveal or hide content based on how the user sizes your widget, although this callback is only supported in Android 4.1 and higher.
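As a sketch of how you might use this callback inside your provider class: the secondary_content view ID and the 110dp threshold below are hypothetical placeholders, not something the generated project contains.
// A hypothetical sketch: show or hide a view depending on the widget's new width.
// R.id.secondary_content and the 110dp threshold are placeholders for illustration.
@Override
public void onAppWidgetOptionsChanged(Context context, AppWidgetManager appWidgetManager,
                                      int appWidgetId, Bundle newOptions) {
    // The widget's current minimum width, in dp, after the resize
    int minWidth = newOptions.getInt(AppWidgetManager.OPTION_APPWIDGET_MIN_WIDTH);
    RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.new_app_widget);
    // Hide the secondary content when the widget is too narrow to display it comfortably
    views.setViewVisibility(R.id.secondary_content,
            minWidth < 110 ? View.GONE : View.VISIBLE);
    appWidgetManager.updateAppWidget(appWidgetId, views);
}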
The onUpdate Event
The onUpdate() lifecycle method is called every time:
The update interval has elapsed.
The user performs some action that triggers the onUpdate() method.
The user places a new instance of the widget on their homescreen (unless your widget contains a configuration Activity, which we’ll be covering in part two).
The onUpdate() lifecycle method is also called in response to ACTION_APPWIDGET_RESTORED, which is broadcast whenever a widget is restored from backup.
For most projects, the onUpdate() method will contain the bulk of the widget provider code, especially since it’s also where you register your widget’s event handlers.
The onDeleted Event
The onDeleted() method is called every time an instance of your widget is deleted from the App Widget Host, which triggers the system’s ACTION_APPWIDGET_DELETED broadcast.
The onDisabled Event
This method is called in response to the ACTION_APPWIDGET_DISABLED broadcast, which is sent when the last instance of your widget is removed from the App Widget Host. For example, if the user created three instances of your widget, then the onDisabled() method would only be called when the user removes the third and final instance from their homescreen.
The onDisabled() lifecycle method is where you should clean up any resources you created in onEnabled(), so if you set up a database in onEnabled(), then you’ll delete it in onDisabled().
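For example, a sketch of that setup/cleanup pairing might look like this; the "widget_prefs" file name is hypothetical:
// A hypothetical sketch of pairing one-time setup with matching cleanup.
// The "widget_prefs" file name is an assumption for illustration.
@Override
public void onEnabled(Context context) {
    // One-time setup when the first widget instance is placed
    context.getSharedPreferences("widget_prefs", Context.MODE_PRIVATE)
            .edit()
            .putLong("first_enabled_at", System.currentTimeMillis())
            .apply();
}

@Override
public void onDisabled(Context context) {
    // Clean up when the last widget instance is removed
    context.getSharedPreferences("widget_prefs", Context.MODE_PRIVATE)
            .edit()
            .clear()
            .apply();
}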
The onRestored Event
The onRestored() method is called in response to ACTION_APPWIDGET_RESTORED, which is broadcast whenever an instance of an application widget is restored from backup. If you want to maintain any persistent data, then you’ll need to override this method and remap the previous AppWidgetIds to the new values, for example:
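The remapping itself might look something like the following sketch, placed inside your provider class; the SharedPreferences file name and key scheme are assumptions for illustration:
// A hypothetical sketch of remapping persisted, per-widget state in onRestored().
// The preference file name and key scheme are assumptions for illustration.
@Override
public void onRestored(Context context, int[] oldWidgetIds, int[] newWidgetIds) {
    SharedPreferences prefs =
            context.getSharedPreferences("widget_prefs", Context.MODE_PRIVATE);
    SharedPreferences.Editor editor = prefs.edit();
    for (int i = 0; i < oldWidgetIds.length; i++) {
        // Move any state stored under the old widget ID across to the new ID
        String value = prefs.getString("state_" + oldWidgetIds[i], null);
        if (value != null) {
            editor.putString("state_" + newWidgetIds[i], value);
            editor.remove("state_" + oldWidgetIds[i]);
        }
    }
    editor.apply();
}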
If you open the NewAppWidget.java file that Android Studio generated automatically, then you’ll see that it already contains implementations for some of these widget lifecycle methods:
import android.appwidget.AppWidgetManager;
import android.appwidget.AppWidgetProvider;
import android.content.Context;
import android.widget.RemoteViews;
//All widgets extend the AppWidgetProvider class//
public class NewAppWidget extends AppWidgetProvider {
static void updateAppWidget(Context context, AppWidgetManager appWidgetManager,
int appWidgetId) {
CharSequence widgetText = context.getString(R.string.appwidget_text);
//Load the layout resource file into a RemoteViews object//
RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.new_app_widget);
views.setTextViewText(R.id.appwidget_text, widgetText);
//Tell the AppWidgetManager about the updated RemoteViews object//
appWidgetManager.updateAppWidget(appWidgetId, views);
}
//Define the onUpdate lifecycle method//
@Override
public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) {
//appWidgetIds is an array of IDs that identifies every instance of your widget, so this
//particular onUpdate() method will update all instances of our application widget//
for (int appWidgetId : appWidgetIds) {
updateAppWidget(context, appWidgetManager, appWidgetId);
}
}
@Override
//Define the onEnabled lifecycle method//
public void onEnabled(Context context) {
//To do//
}
@Override
//Define the onDisabled method//
public void onDisabled(Context context) {
//To do//
}
}
The Widget Layout File
The res/layout/new_app_widget.xml file defines our widget’s layout, which is currently just a blue background with the word Example written across it.
The major difference between creating a layout for an Activity and creating a layout for a widget is that widget layouts must be based on RemoteViews, as this allows Android to display the layout in a process outside of your application (i.e. on the user’s homescreen).
RemoteViews don’t support every kind of layout or View, so when building a widget layout, you’re limited to the following types:
AnalogClock
Button
Chronometer
FrameLayout
GridLayout
ImageButton
ImageView
LinearLayout
ProgressBar
RelativeLayout
TextView
ViewStub
If you’re creating a collection widget, then you can also use the following types when your application is installed on Android 3.0 and higher:
AdapterViewFlipper
GridView
ListView
StackView
ViewFlipper
Subclasses and descendants of the above Views and classes are not supported.
Clicks and Swipes
To ensure users don’t accidentally interact with a widget while they’re navigating around their homescreen, widgets respond to onClick events only.
The exception is when the user removes a widget by dragging it towards their homescreen’s Uninstall action, as in this situation your widget will respond to the vertical swipe gesture. However, since this interaction is managed by the Android system, you don’t need to worry about implementing vertical swipe support in your application.
The Widget Info File
The res/xml/new_app_widget_info.xml file (also known as the AppWidgetProviderInfo file) defines a number of widget properties, including many of the settings you selected in Android Studio’s Configure Component menu, such as your widget’s minimum dimensions and whether it can be placed on the lockscreen.
The configuration file also specifies how frequently your widget requests new information from the App Widget update service. Deciding on this frequency requires you to strike a tricky balance: longer update intervals will help conserve the device’s battery, but place your intervals too far apart and your widget may display noticeably out-of-date information.
You should also be aware that the system will wake a sleeping device in order to retrieve new information, so although updating your widget once every half an hour may not sound excessive, it could result in your widget waking the device once every 30 minutes, which is going to affect battery consumption.
If you open your project’s new_app_widget_info.xml file, then you’ll see that it already defines a number of widget properties, including the update interval.
<?xml version="1.0" encoding="utf-8"?>
<appwidget-provider xmlns:android="http://schemas.android.com/apk/res/android"
//The layout your widget should use when it’s placed on the lockscreen on supported devices//
android:initialKeyguardLayout="@layout/new_app_widget"
//The layout your widget should use when it’s placed on the homescreen//
android:initialLayout="@layout/new_app_widget"
//The minimum space your widget consumes, which is also its initial size//
android:minHeight="40dp"
android:minWidth="40dp"
//The drawable that represents your widget in the Widget Picker//
android:previewImage="@drawable/example_appwidget_preview"
//Whether the widget can be resized horizontally, vertically, or along both axes, on Android 3.1 and higher//
android:resizeMode="horizontal|vertical"
//How frequently your widget should request new information from the app widget provider//
android:updatePeriodMillis="86400000"
//Whether the widget can be placed on the homescreen, lockscreen (“keyguard”) or both.//
//On Android 5.0 and higher, home_screen is the only valid option//
android:widgetCategory="home_screen" />
If you do give your users the option of placing your widget on the lockscreen, then bear in mind that the widget’s contents will be visible to anyone who so much as glances at the lockscreen. If your “default” layout contains any personal or potentially sensitive information, then you should provide an alternative layout for your widget to use when it’s placed on the lockscreen.
The res/values/dimens.xml File
Widgets don’t look their best when they’re pressed against one another, or when they extend to the very edge of the homescreen.
Whenever your widget is displayed on Android 4.0 or higher, the Android operating system automatically inserts some padding between the widget frame and the bounding box.
If your app winds up on a device that’s running anything earlier than Android 4.0, then your widget needs to supply this padding itself.
When you create a widget using the File > New > Widget > AppWidget menu, Android Studio generates two dimens.xml files that guarantee your widget always has the correct padding, regardless of the version of Android it’s installed on.
You’ll find both of these files in your project’s res folder:
res/values/dimens.xml
This file defines the 8dp of padding that your widget needs to provide whenever it’s installed on API level 13 or earlier.
<dimen name="widget_margin">8dp</dimen>
res/values-v14/dimens.xml
Since Android 4.0 and higher automatically applies padding to every widget, any padding that your widget provides will be in addition to this default padding.
To ensure your widget aligns with any app icons or other widgets that the user has placed on their homescreen, this dimens.xml file specifies that your widget should provide no additional margins for Android 4.0 and higher:
<dimen name="widget_margin">0dp</dimen>
This default margin helps to visually balance the homescreen, so you should avoid modifying it—you don’t want your widget to be the odd one out, after all!
Your widget’s layout already references this dimension value (android:padding="@dimen/widget_margin") so be careful not to change this line while working on your widget’s layout.
Although these dimens.xml files are the easiest way of ensuring your widget always has the correct padding, if this technique isn’t suitable for your particular project, then one alternative is to create multiple nine-patch backgrounds with different margins for API level 14 and higher, and API level 13 and lower. You can create nine-patches using Android Studio’s Draw 9-patch tool, or with a dedicated graphics editing program such as Adobe Photoshop.
The Project Manifest
In your project’s AndroidManifest.xml file, you need to register your widget as a BroadcastReceiver and specify the widget provider and the AppWidgetProviderInfo file that this widget should use.
If you open the manifest, you’ll see that Android Studio has already added all this information for you.
//The widget’s AppWidgetProvider; in this instance that’s NewAppWidget.java//
<receiver android:name=".NewAppWidget">
    <intent-filter>
        //An intent filter for the android.appwidget.action.APPWIDGET_UPDATE action//
        <action android:name="android.appwidget.action.APPWIDGET_UPDATE" />
    </intent-filter>
    <meta-data
        android:name="android.appwidget.provider"
        //The widget’s AppWidgetProviderInfo object//
        android:resource="@xml/new_app_widget_info" />
</receiver>
Widget Picker Resource
The res/drawable/example_appwidget_preview.png file is the drawable resource that represents your widget in the Widget Picker.
To encourage users to select your widget from all the available options, this drawable should show your widget, properly configured on a homescreen and displaying lots of useful content.
When you create a widget using the File > New > Widget > AppWidget menu, Android Studio generates a preview drawable automatically (example_appwidget_preview.png).
In part two, I’ll be showing you how to quickly and easily replace this stock drawable, by using Android Studio’s built-in tools to generate your own preview image.
Building Your Layout
Now that we have an overview of how these files come together to create an application widget, let’s expand on this foundation and create a widget that does more than just display the word Example on a blue background!
We’ll be adding the following functionality to our widget:
A TextView that displays an Application Widget ID label.
A TextView that retrieves and displays the ID for this particular widget instance.
A TextView that responds to onClick events by launching the user’s default browser, and loading a URL.
While we could simply drag three TextViews from the Android Studio palette and drop them onto the canvas, if your widget looks good then users will be more likely to place it on their homescreen, so let’s create some resources that’ll give our widget extra visual appeal.
Create the Widget’s Background
I’m going to create a rectangle with rounded corners, a gradient background, and a border, which I’ll be using as the background for my widget:
Control-click your project’s drawable folder and select New > Drawable resource file.
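The remaining steps for this resource, and for the widget layout itself, aren’t reproduced here, but here’s a sketch consistent with the description above and with the id_value and launch_url IDs that our provider code will reference later. The widget_background.xml file name, the colors, and the corner radius are all assumptions:
<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android"
    android:shape="rectangle">
    <!-- Rounded corners -->
    <corners android:radius="16dp" />
    <!-- Gradient background -->
    <gradient
        android:angle="90"
        android:startColor="#42A5F5"
        android:endColor="#1E88E5" />
    <!-- Border -->
    <stroke
        android:width="1dp"
        android:color="#1565C0" />
</shape>
A matching widget layout (res/layout/new_app_widget.xml) would then stack the three TextViews on top of this background; again, treat this as a sketch rather than the tutorial’s exact markup:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:background="@drawable/widget_background"
    android:orientation="vertical"
    android:padding="@dimen/widget_margin">

    <!-- Static label -->
    <TextView
        android:id="@+id/id_label"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/widget_id" />

    <!-- Populated at runtime with this instance's widget ID -->
    <TextView
        android:id="@+id/id_value"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />

    <!-- Tapping this TextView will launch a URL -->
    <TextView
        android:id="@+id/launch_url"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/URL" />
</LinearLayout>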
Finally, open the strings.xml file and define the string resources that we referenced in our layout:
<resources>
    <string name="app_name">Widget</string>
    <string name="widget_id">App Widget ID\u0020</string>
    <string name="URL">Tap to launch URL</string>
</resources>
Android Studio’s Design tab helps you work more efficiently, by previewing how your layout will render across a range of devices. Switching to the Design tab is far easier than running your project on an Android device every single time you make a change to your layout.
Frustratingly, Android Studio doesn’t supply a dedicated widget skin, so by default your widget’s layout is rendered just like a regular Activity, which doesn’t provide the best insight into how your widget will look on the user’s homescreen.
One potential workaround is to render your layout using the Android Wear (Square) skin, which is comparable to the size and shape of an Android application widget:
Make sure Android Studio’s Device tab is selected.
Open the Device dropdown.
Select 280 x 280, hdpi (Square) from the dropdown menu.
Create the Widget Functionality
Now that our widget looks the part, it’s time to give it some functionality:
Retrieve and display data. Every instance of a widget is assigned an ID when it’s added to the App Widget Host. This ID persists across the widget’s lifecycle, and will be completely unique to that widget instance, even if the user adds multiple instances of the same widget to their homescreen.
Add an action. We’ll create an OnClickListener that launches the user’s default browser and loads a URL.
Open the widget provider file (NewAppWidget.java) and delete the line that retrieves the appwidget_text string resource (we’ll replace the setTextViewText() call that used it in the next step):
static void updateAppWidget(Context context, AppWidgetManager appWidgetManager,
int appWidgetId) {
//Delete the following line//
CharSequence widgetText = context.getString(R.string.appwidget_text);
RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.new_app_widget);
views.setTextViewText(R.id.appwidget_text, widgetText);
appWidgetManager.updateAppWidget(appWidgetId, views);
}
In the updateAppWidget block, we now need to populate the R.id.id_value TextView with this widget instance’s unique ID:
RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.new_app_widget);
views.setTextViewText(R.id.id_value, String.valueOf(appWidgetId));
We also need to create an Intent object containing the URL that should load whenever the user interacts with this TextView.
Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse("https://code.tutsplus.com/"));
PendingIntent pendingIntent = PendingIntent.getActivity(context, 0, intent, 0);
//Attach an OnClickListener to our “launch_url” button, using setOnClickPendingIntent//
views.setOnClickPendingIntent(R.id.launch_url, pendingIntent);
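One caveat if you’re building against a newer SDK: from Android 12 (API level 31), every PendingIntent must explicitly declare its mutability, so on those versions you would create it like this instead:
PendingIntent pendingIntent = PendingIntent.getActivity(context, 0, intent, PendingIntent.FLAG_IMMUTABLE);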
Here’s the complete widget provider file:
import android.appwidget.AppWidgetManager;
import android.appwidget.AppWidgetProvider;
import android.content.Context;
import android.widget.RemoteViews;
import android.app.PendingIntent;
import android.content.Intent;
import android.net.Uri;
public class NewAppWidget extends AppWidgetProvider {
static void updateAppWidget(Context context,
AppWidgetManager appWidgetManager,
int appWidgetId) {
//Instantiate the RemoteViews object//
RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.new_app_widget);
//Update your app’s text, using the setTextViewText method of the RemoteViews class//
views.setTextViewText(R.id.id_value, String.valueOf(appWidgetId));
//Register the OnClickListener//
Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse("https://code.tutsplus.com/"));
PendingIntent pendingIntent = PendingIntent.getActivity(context, 0, intent, 0);
views.setOnClickPendingIntent(R.id.launch_url, pendingIntent);
appWidgetManager.updateAppWidget(appWidgetId, views);
}
@Override
public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) {
//Update all instances of this widget//
for (int appWidgetId : appWidgetIds) {
updateAppWidget(context, appWidgetManager, appWidgetId);
}
}
}
Testing the Widget
It’s time to put this widget to the test!
Install the updated project on your Android device.
To ensure you’re seeing the latest version of this widget, remove any existing widget instances from your homescreen.
Press any empty section of the homescreen, and then select your widget from the Widget Picker.
Reposition and resize the widget as desired.
Check that the widget responds to user input events by selecting the Tap to launch URL TextView. The application widget should respond by launching your default browser and loading a URL.
If you’ve been following along with this tutorial, then at this point you have a fully-functioning widget that demonstrates many of the core concepts of Android application widgets. You can also download the finished project from our GitHub repo.
Conclusion
In this post we examined all the files required to deliver an Android application widget, before building a widget that retrieves and displays some unique data, and responds to user input events.
Currently, there’s one major piece of functionality still missing from our widget: it never displays any new information! In the next post, we’ll give this widget the ability to retrieve and display new data automatically, based on a set schedule, and in direct response to user input events.
In the meantime, check out some of our other great posts about Android app development here on Envato Tuts+!
Since Android 1.5, application widgets have enabled users to get information, control apps, and perform crucial tasks, all from the comfort of their homescreens.
In this two-part series, I’ll be showing you how to provide a better user experience by adding an application widget to your Android projects.
By the end of the series, you’ll have created a widget that:
Displays multiple sets of data.
Performs a unique action when the user interacts with a specific View within that widget’s layout.
Updates automatically whenever a set period of time has elapsed.
Updates with new data in response to user interaction.
In this first post, we’ll be using Android Studio’s built-in tools to quickly and easily generate all the files required to deliver any Android application widget. We’ll then expand on this foundation to create a widget that retrieves and displays data and responds to onClick events.
What Are Application Widgets?
An application widget is a lightweight, miniature app that typically falls into one of the following categories:
Information widget. A non-scrollable widget that displays important information, such as a weather or clock widget.
Collection widgets. A scrollable widget that displays a series of related elements, such as a gallery of photos or articles from the same publication. Collection widgets are typically backed by a data source, such as an Array or database. A collection widget must include either a ListView, GridView, StackView, or an AdapterViewFlipper.
Control widgets. A widget that acts as a remote control for your application, allowing the user to trigger frequently used functions without necessarily having to launch your application. Applications that play music often provide a widget that lets the user play, pause, and skip tracks directly from their homescreen.
Hybrid widgets. Why restrict yourself to one category, when you can cherry-pick elements from multiple categories? Just be aware that mixing and matching can lead to a confusing user experience, so for the best results you should design your widget with a single category in mind and then add elements from other categories as required. For example, if you wanted to create a widget that displays today’s weather forecast but also allows users to view the forecast for different days and locations, then you should create an information widget and then add the necessary control elements afterwards.
In addition to the above functionality, most widgets respond to onClick events by launching their associated application, similar to an application shortcut, but they can also provide direct access to specific content within that application.
Application widgets must be placed inside an App Widget Host, most commonly the stock Android homescreen, although there are some third-party App Widget Hosts, such as the popular Nova Launcher and Apex Launcher.
Throughout this series, I’ll be talking about widgets as something you place on the homescreen, but if you have a vague recollection of being able to place widgets on the lockscreen, then this wasn’t just some kind of wonderful dream! Between API levels 17 and 20, it was possible to place widgets on the homescreen or the lockscreen.
Since lockscreen widgets were deprecated in API level 21, in this series we’ll be creating a widget for the homescreen only.
Why Should I Create an Application Widget?
There are several reasons why you should consider adding an application widget to your latest Android project.
Easy Access to Important Information and Features
Widgets allow the user to view your app’s most important information, directly from their homescreen. For example, if you’ve developed a calendar app then you might create a widget that displays details about the user’s next appointment. This is far more convenient than forcing the user to launch your app and potentially navigate multiple screens, just to retrieve the same information.
If you develop a control widget (or a hybrid widget with control elements) then the user can also complete tasks directly from their homescreen. Continuing with our calendar example, your widget might allow the user to create, edit and cancel appointments, potentially without even having to launch your app. This has the potential to remove multiple navigation steps from some of your app’s most important tasks, which can only have a positive impact on the user experience!
Direct Access to All of Your App’s Most Important Screens
Tapping a widget typically takes the user to the top level of the associated application, similar to an application shortcut. However, unlike app shortcuts, widgets can link to specific areas within the associated application. For example, tapping a widget’s New email received notification might launch the application with the new message already selected, while tapping Create new email might take them directly to your app’s ComposeEmail Activity.
By embedding multiple links in your widget’s layout, you can provide convenient, one-tap access to all of your app’s most important Activities.
Create a Loyal, Engaged User Base
As the whole Pokemon Go explosion and subsequent drop-off proved, getting a ton of people to download your app doesn’t automatically guarantee a loyal user base who will still be using your app days, weeks, or even months down the line.
Mobile users are a pretty fickle bunch, and with the memory available on your typical Android smartphone or tablet increasing all the time, it’s easy to lose track of the apps you’ve installed on your device. Chances are, if you pick up your Android smartphone or tablet now and swipe through the app drawer then you’ll discover at least one application that you’ve completely forgotten about.
By creating a widget that showcases all of your app’s most valuable information and features, you ensure that each time the user glances at their homescreen they’re reminded not only that your app exists, but also that it has some great content.
Adding an App Widget to Your Project
Even the most basic widget requires multiple classes and resources, but when you create a widget using Android Studio’s built-in tools, all of these files are generated for you. Since there’s no point in making Android development any harder than it needs to be, we’ll be using these tools to get a head-start on building our widget.
An application widget must always be tied to an underlying app, so create a new Android project with the settings of your choice.
Once Android Studio has built your project, select File > New > Widget > AppWidget from the Android Studio toolbar. This launches a Configure Component menu where you can define some of your widget’s initial settings.
Most of these options are pretty self-explanatory, but there are a few that are worth exploring in more detail.
Resizable (API 12+)
If a widget is resizable, then the user can increase or decrease the number of “cells” it occupies on their homescreen, by long-pressing the widget and then dragging the blue handles that appear around its outline.
Wherever possible, you should give your widget the ability to resize horizontally and vertically, as this will help your widget adapt to a range of screen configurations and homescreen setups. If a user has a seriously cluttered homescreen, then it may be impossible for your widget to even fit onto that homescreen, unless your widget is resizable.
If you do want to create a non-resizable widget, then open the Resizable dropdown menu and select either Only horizontally, Only vertically, or Not resizable.
Minimum Width and Height
The minimum width and height values specify the number of cells your widget initially occupies when it’s placed on the homescreen.
For resizable widgets, this is the smallest size the user can shrink your widget to, so you can use these values to prevent users from reducing your widget to the point where it becomes unusable.
If your widget isn’t resizable, then the minimum width and height values are your widget’s permanent width and height.
To increase a widget’s chances of fitting comfortably across a range of homescreens, it’s recommended that you never use anything larger than 4 by 4 for the minimum width and height values.
While the exact width and height of a homescreen “cell” vary between devices, you can get a rough estimate of your widget’s size in density-independent pixels (dp) using the following formula:
70 × number of cells − 30
For example, if your widget is 2 x 3 cells:
70 × 2 − 30 = 110
70 × 3 − 30 = 180
This widget will occupy around 110 x 180 dp on the user’s homescreen. If these values don’t align with the dimensions of a particular device’s cells, then Android will automatically round your widget to the nearest cell size.
Review all the options in this menu and make any desired changes (I’m sticking with the defaults) and then click Finish.
Android Studio will now generate all the files and resources required to deliver a basic application widget. This widget isn’t exactly exciting (it’s basically just a blue block with the word Example written across it) but it is a functional widget that you can test on your device.
To test the widget:
Install your project on a physical Android device or AVD (Android Virtual Device).
Launch Android’s Widget Picker by long-pressing any empty space on the homescreen, and then tapping the word Widget that appears towards the bottom of the screen.
Swipe through the Widget Picker until you spot the blue Example widget.
Press and hold this widget, and then drop it onto your homescreen.
Enter resize mode by long-pressing the widget until a set of blue handles appears, and then drag these handles to increase or decrease the number of cells that this widget occupies.
Exploring the Application Widget Files
This widget might not do all that much, but it includes all the classes and resources that we’ll be working on throughout the rest of this series, so let’s take a look at these files and the role they play in delivering an application widget.
NewAppWidget.java
The widget provider is a convenience class containing the methods used to programmatically interface with a widget via broadcast events. Under the hood, a widget is essentially just a BroadcastReceiver that can respond to various actions, such as the user placing a new widget instance on their homescreen.
Most notably, the app widget provider is where you’ll define your widget’s lifecycle methods, which either get called for every instance of the widget or for specific instances only.
Although we tend to think of a widget as a single entity that the user places on their homescreen once, there’s nothing to prevent them from creating multiple instances of the same widget. Maybe your widget is customisable, to the point where different instances can have significantly different functionality, or maybe the user just loves your widget so much that they want to plaster it all over their homescreen!
Let’s take a look at the different lifecycle methods that you can implement in the widget provider class:
The onReceive Event
Android calls the onReceive() method on the registered BroadcastReceiver whenever the specified event occurs.
You typically won’t need to implement this method manually, as the AppWidgetProvider class automatically filters all widget broadcasts and delegates operations to the appropriate methods.
The onEnabled Event
The onEnabled() lifecycle method is called in response to ACTION_APPWIDGET_ENABLED, which is broadcast when the user adds the first instance of your widget to their homescreen. If the user creates two instances of your widget, then onEnabled() is called for the first instance, but not for the second.
This lifecycle method is where you perform any setup that only needs to occur once for all widget instances, such as creating a database or setting up a service.
Note that if the user deletes all instances of your widget from their device and then creates a new instance, then this is classed as the first instance, and consequently the onEnabled() method will be called once again.
The onAppWidgetOptionsChanged Event
This lifecycle method is called in response to ACTION_APPWIDGET_OPTIONS_CHANGED, which is broadcast when a widget instance is created and every time that widget is resized. You can use this method to reveal or hide content based on how the user sizes your widget, although this callback is only supported in Android 4.1 and higher.
The onUpdate Event
The onUpdate() lifecycle method is called every time:
The update interval has elapsed.
The user performs an action that triggers the onUpdate() method.
The user places a new instance of the widget on their homescreen (unless your widget contains a configuration Activity, which we’ll be covering in part two).
The onUpdate() lifecycle method is also called in response to ACTION_APPWIDGET_RESTORED, which is broadcast whenever a widget is restored from backup.
For most projects, the onUpdate() method will contain the bulk of the widget provider code, especially since it’s also where you register your widget’s event handlers.
The onDeleted Event
The onDeleted() method is called every time an instance of your widget is deleted from the App Widget Host, which triggers the system’s ACTION_APPWIDGET_DELETED broadcast.
The onDisabled Event
This method is called in response to the ACTION_APPWIDGET_DISABLED broadcast, which is sent when the last instance of your widget is removed from the App Widget Host. For example, if the user created three instances of your widget, then the onDisabled() method would only be called when the user removes the third and final instance from their homescreen.
The onDisabled() lifecycle method is where you should clean up any resources you created in onEnabled(), so if you set up a database in onEnabled() then you’ll delete it in onDisabled().
The onRestored Event
The onRestored() method is called in response to ACTION_APPWIDGET_RESTORED, which is broadcast whenever an instance of an application widget is restored from backup. If you want to maintain any persistent data, then you’ll need to override this method and remap the previous AppWidgetIds to the new values, for example:
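The generated project doesn’t implement onRestored(), but a minimal sketch might look like the following, assuming your widget stores per-instance data in SharedPreferences under keys derived from the widget ID (the PREFS_NAME and KEY_PREFIX constants are hypothetical):
//Hypothetical names for this sketch//
private static final String PREFS_NAME = "widget_prefs";
private static final String KEY_PREFIX = "widget_";

@Override
public void onRestored(Context context, int[] oldWidgetIds, int[] newWidgetIds) {
    SharedPreferences prefs = context.getSharedPreferences(PREFS_NAME, Context.MODE_PRIVATE);
    SharedPreferences.Editor editor = prefs.edit();
    for (int i = 0; i < oldWidgetIds.length; i++) {
        //Copy any data stored under the old ID across to the new ID//
        String oldValue = prefs.getString(KEY_PREFIX + oldWidgetIds[i], null);
        if (oldValue != null) {
            editor.putString(KEY_PREFIX + newWidgetIds[i], oldValue);
            editor.remove(KEY_PREFIX + oldWidgetIds[i]);
        }
    }
    editor.apply();
}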
If you open the NewAppWidget.java file that Android Studio generated automatically, then you’ll see that it already contains implementations for some of these widget lifecycle methods:
import android.appwidget.AppWidgetManager;
import android.appwidget.AppWidgetProvider;
import android.content.Context;
import android.widget.RemoteViews;

//All widgets extend the AppWidgetProvider class//
public class NewAppWidget extends AppWidgetProvider {

    static void updateAppWidget(Context context, AppWidgetManager appWidgetManager,
                                int appWidgetId) {
        CharSequence widgetText = context.getString(R.string.appwidget_text);

        //Load the layout resource file into a RemoteViews object//
        RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.new_app_widget);
        views.setTextViewText(R.id.appwidget_text, widgetText);

        //Tell the AppWidgetManager about the updated RemoteViews object//
        appWidgetManager.updateAppWidget(appWidgetId, views);
    }

    //Define the onUpdate lifecycle method//
    @Override
    public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) {
        //appWidgetIds is an array of IDs that identifies every instance of your widget, so this
        //particular onUpdate() method will update all instances of our application widget//
        for (int appWidgetId : appWidgetIds) {
            updateAppWidget(context, appWidgetManager, appWidgetId);
        }
    }

    //Define the onEnabled lifecycle method//
    @Override
    public void onEnabled(Context context) {
        //To do//
    }

    //Define the onDisabled lifecycle method//
    @Override
    public void onDisabled(Context context) {
        //To do//
    }
}
The Widget Layout File
The res/layout/new_app_widget.xml file defines our widget’s layout, which is currently just a blue background with the word Example written across it.
The major difference between creating a layout for an Activity and creating a layout for a widget is that widget layouts must be based on RemoteViews, as this allows Android to display the layout in a process outside of your application (i.e. on the user’s homescreen).
RemoteViews don’t support every kind of layout or View, so when building a widget layout, you’re limited to the following types:
AnalogClock
Button
Chronometer
FrameLayout
GridLayout
ImageButton
ImageView
LinearLayout
ProgressBar
RelativeLayout
TextView
ViewStub
If you’re creating a collection widget, then you can also use the following types when your application is installed on Android 3.0 and higher:
AdapterViewFlipper
GridView
ListView
StackView
ViewFlipper
Subclasses and descendants of the above Views and classes are not supported.
Clicks and Swipes
To ensure users don’t accidentally interact with a widget while they’re navigating around their homescreen, widgets respond to onClick events only.
The exception is when the user removes a widget by dragging it towards their homescreen’s Uninstall action, as in this situation your widget will respond to the vertical swipe gesture. However, since this interaction is managed by the Android system, you don’t need to worry about implementing vertical swipe support in your application.
The Widget Info File
The res/xml/new_app_widget_info.xml file (also known as the AppWidgetProviderInfo file) defines a number of widget properties, including many of the settings you selected in Android Studio’s Configure Component menu, such as your widget’s minimum dimensions and whether it can be placed on the lockscreen.
The configuration file also specifies how frequently your widget requests new information from the App Widget update service. Deciding on this frequency requires you to strike a tricky balance: longer update intervals will help conserve the device’s battery, but place your intervals too far apart and your widget may display noticeably out-of-date information.
You should also be aware that the system will wake a sleeping device in order to retrieve new information, so although updating your widget once every half an hour may not sound excessive, it could result in your widget waking the device once every 30 minutes, which is going to affect battery consumption. Note that the system also enforces a minimum update interval: any updatePeriodMillis value under 30 minutes is treated as 30 minutes.
If you open your project’s new_app_widget_info.xml file, then you’ll see that it already defines a number of widget properties, including the update interval.
<?xml version="1.0" encoding="utf-8"?>
<appwidget-provider xmlns:android="http://schemas.android.com/apk/res/android"

    //The layout your widget should use when it’s placed on the lockscreen on supported devices//
    android:initialKeyguardLayout="@layout/new_app_widget"

    //The layout your widget should use when it’s placed on the homescreen//
    android:initialLayout="@layout/new_app_widget"

    //The minimum space your widget consumes, which is also its initial size//
    android:minHeight="40dp"
    android:minWidth="40dp"

    //The drawable that represents your widget in the Widget Picker//
    android:previewImage="@drawable/example_appwidget_preview"

    //Whether the widget can be resized horizontally, vertically, or along both axes, on Android 3.1 and higher//
    android:resizeMode="horizontal|vertical"

    //How frequently (in milliseconds) the system should trigger your widget’s update process//
    android:updatePeriodMillis="86400000"

    //Whether the widget can be placed on the homescreen, the lockscreen (“keyguard”), or both.//
    //On Android 5.0 and higher, home_screen is the only valid option//
    android:widgetCategory="home_screen" />
If you do give your users the option of placing your widget on the lockscreen, then bear in mind that the widget’s contents will be visible to anyone who so much as glances at the lockscreen. If your “default” layout contains any personal or potentially sensitive information, then you should provide an alternative layout for your widget to use when it’s placed on the lockscreen.
The res/values/dimens.xml File
Widgets don’t look their best when they’re pressed against one another, or when they extend to the very edge of the homescreen.
Whenever your widget is displayed on Android 4.0 or higher, the Android operating system automatically inserts some padding between the widget frame and the bounding box.
If your app winds up on a device that’s running anything earlier than Android 4.0, then your widget needs to supply this padding itself.
When you create a widget using the File > New > Widget > AppWidget menu, Android Studio generates two dimens.xml files that guarantee your widget always has the correct padding, regardless of the version of Android it’s installed on.
You’ll find both of these files in your project’s res folder:
res/values/dimens.xml
This file defines the 8dp of padding that your widget needs to provide whenever it’s installed on API level 13 or earlier.
<dimen name="widget_margin">8dp</dimen>
res/values-v14/dimens.xml
Since Android 4.0 and higher automatically applies padding to every widget, any padding that your widget provides will be in addition to this default padding.
To ensure your widget aligns with any app icons or other widgets that the user has placed on their homescreen, this dimens.xml file specifies that your widget should provide no additional margins for Android 4.0 and higher:
<dimen name="widget_margin">0dp</dimen>
This default margin helps to visually balance the homescreen, so you should avoid modifying it—you don’t want your widget to be the odd one out, after all!
Your widget’s layout already references this dimension value (android:padding="@dimen/widget_margin") so be careful not to change this line while working on your widget’s layout.
Although these dimens.xml files are the easiest way of ensuring your widget always has the correct padding, if this technique isn’t suitable for your particular project, then one alternative is to create multiple nine-patch backgrounds with different margins for API level 14 and higher, and API level 13 and lower. You can create nine-patches using Android Studio’s Draw 9-patch tool, or with a dedicated graphics editing program such as Adobe Photoshop.
The Project Manifest
In your project’s AndroidManifest.xml file, you need to register your widget as a BroadcastReceiver and specify the widget provider and the AppWidgetProviderInfo file that this widget should use.
If you open the manifest, you’ll see that Android Studio has already added all this information for you.
//The widget’s AppWidgetProvider; in this instance that’s NewAppWidget.java//
<receiver android:name=".NewAppWidget">

    //An intent filter for the android.appwidget.action.APPWIDGET_UPDATE action//
    <intent-filter>
        <action android:name="android.appwidget.action.APPWIDGET_UPDATE" />
    </intent-filter>

    //The widget’s AppWidgetProviderInfo object//
    <meta-data
        android:name="android.appwidget.provider"
        android:resource="@xml/new_app_widget_info" />
</receiver>
Widget Picker Resource
The res/drawable/example_appwidget_preview.png file is the drawable resource that represents your widget in the Widget Picker.
To encourage users to select your widget from all the available options, this drawable should show your widget, properly configured on a homescreen and displaying lots of useful content.
When you create a widget using the File > New > Widget > AppWidget menu, Android Studio generates a preview drawable automatically (example_appwidget_preview.png).
In part two, I’ll be showing you how to quickly and easily replace this stock drawable, by using Android Studio’s built-in tools to generate your own preview image.
Building Your Layout
Now that we have an overview of how these files come together to create an application widget, let’s expand on this foundation and create a widget that does more than just display the word Example on a blue background!
We’ll be adding the following functionality to our widget:
A TextView that displays an Application Widget ID label.
A TextView that retrieves and displays the ID for this particular widget instance.
A TextView that responds to onClick events by launching the user’s default browser and loading a URL.
We could simply drag three TextViews from the Android Studio palette and drop them onto the canvas, but a widget that looks good is far more likely to earn a permanent spot on the user’s homescreen, so let’s create some resources that’ll give our widget some extra visual appeal.
Create the Widget’s Background
I’m going to create a rectangle with rounded corners, a gradient background, and a border, which I’ll be using as the background for my widget:
Control-click your project’s drawable folder and select New > Drawable resource file.
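As an example, a shape drawable along the following lines produces a rounded rectangle with a gradient fill and a border; the widget_background.xml file name and the colour values here are just my choices, so adjust them to taste:
<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android"
    android:shape="rectangle">

    //Rounded corners//
    <corners android:radius="16dp" />

    //A subtle top-to-bottom gradient//
    <gradient
        android:startColor="#80FFFFFF"
        android:endColor="#40FFFFFF"
        android:angle="270" />

    //A border//
    <stroke
        android:width="1dp"
        android:color="#FFFFFF" />
</shape>
Our layout then needs the three TextViews described above, sitting on top of this background. Here’s a sketch of what res/layout/new_app_widget.xml might look like; note that it keeps the @dimen/widget_margin padding that the generated layout already references, and uses the id_value and launch_url IDs that our provider code will target later:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:padding="@dimen/widget_margin"
    android:background="@drawable/widget_background"
    android:orientation="vertical">

    <TextView
        android:id="@+id/id_label"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/widget_id" />

    <TextView
        android:id="@+id/id_value"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />

    <TextView
        android:id="@+id/launch_url"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/URL" />
</LinearLayout>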
Finally, open the strings.xml file and define the string resources that we referenced in our layout:
<resources>
    <string name="app_name">Widget</string>
    <string name="widget_id">App Widget ID\u0020</string>
    <string name="URL">Tap to launch URL</string>
</resources>
Android Studio’s Design tab helps you work more efficiently, by previewing how your layout will render across a range of devices. Switching to the Design tab is far easier than running your project on an Android device every single time you make a change to your layout.
Frustratingly, Android Studio doesn’t supply a dedicated widget skin, so by default your widget’s layout is rendered just like a regular Activity, which doesn’t provide the best insight into how your widget will look on the user’s homescreen.
One potential workaround is to render your layout using the Android Wear (Square) skin, which is comparable to the size and shape of an Android application widget:
Make sure Android Studio’s Design tab is selected.
Open the Device dropdown.
Select 280 x 280, hdpi (Square) from the dropdown menu.
Create the Widget Functionality
Now that our widget looks the part, it’s time to give it some functionality:
Retrieve and display data. Every instance of a widget is assigned an ID when it’s added to the App Widget Host. This ID persists across the widget’s lifecycle and will be completely unique to that widget instance, even if the user adds multiple instances of the same widget to their homescreen.
Add an action. We’ll create an OnClickListener that launches the user’s default browser and loads a URL.
Open the widget provider file (NewAppWidget.java) and delete the line that retrieves the appwidget_text string resource, along with the setTextViewText() call that displays it, since appwidget_text no longer exists in our new layout:
static void updateAppWidget(Context context, AppWidgetManager appWidgetManager,
                            int appWidgetId) {

    //Delete the following line...//
    CharSequence widgetText = context.getString(R.string.appwidget_text);

    RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.new_app_widget);

    //...and this one, which displays the deleted string//
    views.setTextViewText(R.id.appwidget_text, widgetText);

    appWidgetManager.updateAppWidget(appWidgetId, views);
}
In the updateAppWidget block, we now need to display the widget’s unique ID in the TextView with the R.id.id_value ID:
RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.new_app_widget);
views.setTextViewText(R.id.id_value, String.valueOf(appWidgetId));
We also need to create an Intent object containing the URL that should load whenever the user interacts with this TextView.
Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse("https://code.tutsplus.com/"));
PendingIntent pendingIntent = PendingIntent.getActivity(context, 0, intent, 0);
//Attach an OnClickListener to our “launch_url” TextView, using setOnClickPendingIntent//
views.setOnClickPendingIntent(R.id.launch_url, pendingIntent);
Here’s the complete widget provider file:
import android.app.PendingIntent;
import android.appwidget.AppWidgetManager;
import android.appwidget.AppWidgetProvider;
import android.content.Context;
import android.content.Intent;
import android.net.Uri;
import android.widget.RemoteViews;

public class NewAppWidget extends AppWidgetProvider {

    static void updateAppWidget(Context context,
                                AppWidgetManager appWidgetManager,
                                int appWidgetId) {

        //Instantiate the RemoteViews object//
        RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.new_app_widget);

        //Update your app’s text, using the setTextViewText method of the RemoteViews class//
        views.setTextViewText(R.id.id_value, String.valueOf(appWidgetId));

        //Register the OnClickListener//
        Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse("https://code.tutsplus.com/"));
        PendingIntent pendingIntent = PendingIntent.getActivity(context, 0, intent, 0);
        views.setOnClickPendingIntent(R.id.launch_url, pendingIntent);

        appWidgetManager.updateAppWidget(appWidgetId, views);
    }

    @Override
    public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) {
        //Update all instances of this widget//
        for (int appWidgetId : appWidgetIds) {
            updateAppWidget(context, appWidgetManager, appWidgetId);
        }
    }
}
Testing the Widget
It’s time to put this widget to the test!
Install the updated project on your Android device.
To ensure you’re seeing the latest version of this widget, remove any existing widget instances from your homescreen.
Long-press any empty section of the homescreen, and then select your widget from the Widget Picker.
Reposition and resize the widget as desired.
Check that the widget responds to user input events, by selecting the Tap to launch URL TextView. The application widget should respond by launching your default browser and loading a URL.
If you’ve been following along with this tutorial, then at this point you have a fully functioning widget that demonstrates many of the core concepts of Android application widgets. You can also download the finished project from our GitHub repo.
Conclusion
In this post we examined all the files required to deliver an Android application widget, before building a widget that retrieves and displays some unique data and responds to user input events.
Currently, there’s one major piece of functionality still missing from our widget: it never displays any new information! In the next post, we’ll give this widget the ability to retrieve and display new data automatically, based on a set schedule, and in direct response to user input events.
In the meantime, check out some of our other great posts about Android app development here on Envato Tuts+!
In these tutorials, I'll show you how to create and interact with a GraphQL database using AWS AppSync and React Native. This app will have real-time and offline functionality, something we get out of the box with AppSync. In this post we'll get started by setting up the back-end with AppSync.
A great thing about AppSync is that it uses GraphQL—an open standard and a powerful new paradigm for the web and mobile back-end. If you want to learn more about GraphQL, how it differs from REST APIs, and how it can make your job as an app developer easier, check out some of our GraphQL content here on Envato Tuts+.
In these posts, we will be building a travel app called Cities. Have you ever been watching a show on your favorite food channel and seen an awesome food truck, or spoken to a friend that just got back from a trip and was really excited about the Owl Bar she visited? Well fret no more, we will be building an app for you to keep up with all of those cool places you want to visit, as well as the cities where they are located.
This app will demonstrate and implement all of the functionality you will need to build a real-world, full-stack React Native and GraphQL application.
AppSync offers an easy way to get up and running with a scalable, real-time GraphQL server without having to create and maintain it all on your own.
Within the AppSync console, we will do everything from creating our GraphQL schema to provisioning our database and resolvers. The console also has GraphiQL set up, so we can test and query our database without any extra setup.
We will implement this configuration on our client, which will give us a seamless way to interact with our GraphQL endpoint!
AppSync will allow you to use one of three resolver types right out of the box: DynamoDB, Elasticsearch, or AWS Lambda. We will be using DynamoDB in this tutorial.
Getting Started
The first thing we need to do is create a new AppSync application and add our basic Schema.
Our application will need to store two sets of data—a list of cities and a list of locations that we will associate with individual cities within the list—so our schema will have two main data types (City & Location).
To get started with AppSync, go to the AWS Console and choose AWS AppSync within the Services dropdown menu.
Once we are in the AppSync dashboard, we need to click the Create API button:
Now, we will have the option to give the application a name (I'm calling mine TravelApp), and choose the type of schema (custom or sample). We will choose the custom schema option, and then click Create.
The next screen will be the dashboard for the new application. We'll see some useful information right away, including the URL for our app as well as the authorization mode. On the left side, you will see a few links: Schema, Queries, DataSources, and Settings.
Have a look around at the options here before you move on to the next step.
Creating a Schema and Provisioning a Data Source
The next thing we will do is create the schema we would like to use for our application. Again, the schema will have a City and Location type to start off.
From the editor, click on the Schema tab, create the following basic schema with two types and one query, and then click Save:
type City {
  id: ID!
  name: String!
  country: String!
  locations: [Location]
}

type Location {
  id: ID!
  cityId: ID!
  name: String!
  info: String
}

type Query {
  fetchCity(id: ID!): City
}

schema {
  query: Query
}
Attach the Schema to a Database
Now that we have a basic schema created, we need to attach this schema to a database!
AppSync makes this extremely easy. Click the Create Resources button at the right of the screen. We will need two database tables: one to hold our cities, and another to hold our locations.
Choose City, accept all of the defaults, and click Create. You'll notice that this will automatically add some useful queries, mutations, and subscriptions to our schema!
Go ahead and do the same for the Location resource. We have now successfully created two database tables that go along with our schema, and also some basic queries, mutations, subscriptions, and resolvers that will map the schema to those tables (we'll explore the resolvers in the next section).
Let's now take a look at what was created. In the left-hand menu, click on Data Sources.
You should now see the two data sources we just created!
Run Some Test Queries
Now that we have new Mutations and Subscriptions created in our Schema, let's add them to our Schema definition.
To do so, scroll to the bottom of the Schema and update the schema definition to the following:
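Your updated schema definition should look something like this:
schema {
  query: Query
  mutation: Mutation
  subscription: Subscription
}
Next, open the Queries tab and run a createCity mutation along these lines (the exact input fields come from the CreateCityInput type that AppSync generated, so yours may differ slightly):
mutation createCity {
  createCity(input: {
    id: "00001"
    name: "Seattle"
    country: "USA"
  }) {
    id
    name
    country
  }
}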
This will add a record for Seattle to the city table, with an id of 00001.
Then, create a query to retrieve that data:
query getCity {
  getCity(id: "00001") {
    id
    name
    country
  }
}
When you click the orange play button, you can choose to execute the createCity mutation or the getCity query. Run them both and you should see the Seattle city data retrieved and output on the right side of the screen.
If you want to see how this data is represented in the database, you can explore the DynamoDB city table linked from the Data Sources tab.
Resolver Mapping Templates
You may be wondering how the query maps to the database so seamlessly. The answer is resolvers!
If you look at the right-hand side of the AppSync dashboard's Schema tab, you'll see a section titled Data Types. This lists all of the data types within our Schema. To the right of each field, we see a heading labeled Resolver.
Resolvers are basically the interface between the schema and the database that we are currently using. We can use resolvers for everything from basic retrieval of items to complex actions like fine-grained access control.
Resolvers are written in a DSL called Velocity Templating Language (VTL). AppSync will automatically provision basic resolvers upon datasource creation, but they are highly configurable. At this point, we don't really need to change a lot in our resolvers, but let's take a look at three of the main types of resolvers you'll probably need to work with in the real world. These are connected to the following basic operations:
Getting a single item by its id
Getting a list of items
Putting an item into the database
Getting an Item by Id
In the Data Types tab, next to the schema definitions, find getCity under Query, and click on CityTable.
This should take you to the resolver configuration screen. From this screen, you'll see that there are three main pieces to a resolver:
Data source name
Request mapping template
Response mapping template
The data source is the table that you would like to interact with.
The request mapping template describes how the database will handle the request.
Here, you can write your own mapping template or choose from a selection of prepopulated templates for basic actions like getting or putting an item, among other things.
Here, you see the template for getting an item.
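The stock request mapping template that AppSync generates for a DynamoDB GetItem operation looks roughly like this (yours may differ slightly depending on your schema and the console version):
{
    "version": "2017-02-28",
    "operation": "GetItem",
    "key": {
        "id": $util.dynamodb.toDynamoDBJson($ctx.args.id)
    }
}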
The response mapping template describes how to handle the response from the database.
In our response template, we are basically just returning $context.result and wrapping it in the $util.toJson utility function. This is just one of many helper utils that will abstract away some of the VTL boilerplate. See the complete list of utility methods in the official documentation.
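In fact, the default response mapping template is typically just this single line:
$util.toJson($ctx.result)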
As your application becomes more complex, the key to getting proficient at AppSync is getting comfortable with working with these mapping templates. It took me a few hours to wrap my head around how it all worked, but after experimenting with it for a short while I could see how powerful it is.
We have completed our schema, but we have one last step before we can begin interacting with our new GraphQL endpoint from our React Native application!
Because we are going to be storing all of our locations in one table but querying them based on the city we are currently viewing, we will need to create a secondary index to allow us to efficiently query locations with a particular cityId.
To create a secondary index, go to Data Sources and click on the Location Table hyperlink.
This should take you to the DynamoDB table view for the Location Table. Here, click the Indexes tab and create a new index with a partition key of cityId.
You can lower the read and write capacity units to 1 for the purposes of this tutorial.
Next, we need to update our listLocations query to accept this cityId as an argument, so update the query for listLocations to the following:
type Query {
  # ...all previous queries omitted
  listLocations(cityId: ID!, first: Int, after: String): LocationConnection
}
Now, we need to update our listLocations resolver to use this new cityId index! Remember, we really only want listLocations to return an array of locations for the city that we are looking at, so the listLocations resolver will take the cityId as a parameter and only return locations for that city.
To get this working, let's update the request mapping template for listLocations to be the following:
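A request mapping template along the following lines should work, assuming your secondary index was created with the default name cityId-index:
{
    "version": "2017-02-28",
    "operation": "Query",
    "index": "cityId-index",
    "query": {
        "expression": "cityId = :cityId",
        "expressionValues": {
            ":cityId": $util.dynamodb.toDynamoDBJson($ctx.args.cityId)
        }
    }
}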
In this tutorial, we've created the back-end for a React Native app with its own GraphQL endpoint. We also looked at how to create and update resolvers and work with the AppSync schema.
Now that we are finished configuring everything in the console, we can go ahead and create our React Native client! Stay tuned for the next post, where I dive into the React Native mobile app and show you how to hook React Native up to AppSync.
In the meantime, check out some of our other posts about React Native app development!
One of the first things users will want to do with a new smart home device is get it on their wireless network. Many IoT devices lack a screen or keyboard, so one way to do this is by allowing users to pair a smartphone to the device so that they can control and configure the device. This is how Nest and Google Home work, among others, and the Nearby Connections 2.0 API makes it possible.
In this article you'll get an introduction to the Nearby Connections 2.0 API and how it can be used to pair an Android smartphone to an Android Things device in order to provide your users with a companion device experience.
What Is the Nearby Connections API?
The Nearby Connections API allows two devices to communicate with each other directly over Bluetooth or wireless without the use of a centralized access point. There are two roles that a device may take on: advertiser, which lets other devices know that it is available to be connected to, and discoverer, which attempts to find advertisers and connect to them. Once a set of devices (also known as "endpoints" at this stage) have connected together, they may send data to any other endpoint on the Nearby Connections network.
There are two strategies that the Nearby Connections API can use for connecting devices together. The first, P2P_STAR, is the simplest to work with. It consists of one advertiser that can support multiple discoverers connecting to it. The second, P2P_CLUSTER, allows any number of devices to connect to, and accept connections from, any other number of devices. This creates a mesh network with a less centralized point of failure, though it also takes up more bandwidth. This strategy is ideal for smaller payloads that do not need to go through a central device, such as for games.
This tutorial will focus on using the simpler star strategy to connect the IoT device as an advertiser and will use the user’s smartphone as a discoverer. However, by the end, you should also have enough information to implement a cluster strategy as well.
Let’s Get Set Up!
There will be two modules for this tutorial: the mobile app and the Android Things app. Once you have created those in Android Studio, you will need to include the Google Play Services dependency for Nearby Connections in the module-level build.gradle file for both apps.
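At the time of writing, that dependency looks something like this (check the Google Play Services release notes for the latest version number):
dependencies {
    implementation 'com.google.android.gms:play-services-nearby:11.8.0'
}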
After you have run a Gradle sync, open the AndroidManifest.xml files for both modules and include the following permissions as direct children of the manifest element (not inside the application node).
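Nearby Connections needs the following permissions (these are the ones listed in the official documentation; ACCESS_COARSE_LOCATION is the dangerous permission you'll need to request at runtime):
<uses-permission android:name="android.permission.BLUETOOTH" />
<uses-permission android:name="android.permission.BLUETOOTH_ADMIN" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.CHANGE_WIFI_STATE" />
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />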
Android Things devices will have these permissions granted to the device after rebooting, though you will need to request the location permission from users on the phone app.
The MainActivity class in both the things and mobile modules will need to implement the interfaces used for Google Play Services callbacks, like so:
public class MainActivity extends FragmentActivity implements
        GoogleApiClient.ConnectionCallbacks,
        GoogleApiClient.OnConnectionFailedListener {

    @Override
    public void onConnected(@Nullable Bundle bundle) {}

    @Override
    public void onConnectionSuspended(int i) {}

    @Override
    public void onConnectionFailed(@NonNull ConnectionResult connectionResult) {}
}
Once you have validated that the user has the proper location permissions in onCreate(), you can begin connecting to Google Play Services to use the Nearby Connections API.
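A minimal sketch of that connection code, assuming a GoogleApiClient field named mGoogleApiClient:
//Request the Nearby Connections API and connect to Google Play Services
mGoogleApiClient = new GoogleApiClient.Builder(this)
        .addApi(Nearby.CONNECTIONS_API)
        .addConnectionCallbacks(this)
        .addOnConnectionFailedListener(this)
        .build();
mGoogleApiClient.connect();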
When the GoogleApiClient has finished connecting, the onConnected() method will be called. This is where you will start the advertising or discovery process for your device. In addition, both applications will need a service id, which is a unique String identifier.
private static final String SERVICE_ID = "UNIQUE_SERVICE_ID";
Advertising on Nearby Connections
When working with the Nearby Connections API, you will need to create a ConnectionLifecycleCallback that will, as the name implies, be triggered on various connection lifecycle events. For this demo, we will only use the onConnectionInitiated() method. It will save a reference to the first endpoint that attempts to connect to it, accept the connection, and then stop advertising. If the connection is not successful, the app can restart advertising.
private final ConnectionLifecycleCallback mConnectionLifecycleCallback =
        new ConnectionLifecycleCallback() {
            @Override
            public void onConnectionInitiated(String endpointId, ConnectionInfo connectionInfo) {
                endpoint = endpointId;
                Nearby.Connections.acceptConnection(mGoogleApiClient, endpointId, mPayloadCallback)
                        .setResultCallback(new ResultCallback<com.google.android.gms.common.api.Status>() {
                            @Override
                            public void onResult(@NonNull com.google.android.gms.common.api.Status status) {
                                if (status.isSuccess()) {
                                    //Connection accepted
                                }
                            }
                        });
                Nearby.Connections.stopAdvertising(mGoogleApiClient);
            }

            @Override
            public void onConnectionResult(String endpointId, ConnectionResolution result) {}

            @Override
            public void onDisconnected(String endpointId) {}
        };
You may have noticed that the above method also references a PayloadCallback object. This object has methods that are called when a payload of data is sent from the advertiser to an endpoint, as well as when data is received from an endpoint. The onPayloadReceived() method is where we would handle any data sent to our Android Things device. This method receives the Payload object, which can be turned into an array of bytes, and a String representing the endpoint ID of the sending device.
private PayloadCallback mPayloadCallback = new PayloadCallback() {
    @Override
    public void onPayloadReceived(String endpoint, Payload payload) {
        Log.e("Tuts+", new String(payload.asBytes()));
    }

    @Override
    public void onPayloadTransferUpdate(String endpoint, PayloadTransferUpdate payloadTransferUpdate) {}
};
At this point, you can start advertising on your IoT device with the following method:
Nearby.Connections.startAdvertising(
        mGoogleApiClient,
        "Device Name",
        SERVICE_ID,
        mConnectionLifecycleCallback,
        new AdvertisingOptions(Strategy.P2P_STAR));
You may notice that this is where we apply the P2P_STAR strategy to our Nearby Connections network.
When you want to send a payload to another device, you can use the Nearby.Connections.sendPayload() method with the Google API client reference, the ID of the remote endpoint, and a Payload object wrapping the bytes you would like to send.
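For example, sending a short string to the endpoint we saved earlier in onConnectionInitiated() might look like this:
//Wrap the bytes in a Payload and send them to the connected endpoint
Nearby.Connections.sendPayload(mGoogleApiClient, endpoint, Payload.fromBytes("Hello!".getBytes()));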
One trick that I found useful while working with the Nearby Connections API on an Android Things device is re-enabling WiFi on reboot, as the device can end up with wireless disabled if the device is shut down or loses power while advertising. You can do this by retrieving the WifiManager system service and calling setWifiEnabled().
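A sketch of that workaround, which you might call early in the Things app's onCreate():
//Re-enable WiFi in case it was disabled when the device lost power
WifiManager wifiManager = (WifiManager) getApplicationContext().getSystemService(Context.WIFI_SERVICE);
if (!wifiManager.isWifiEnabled()) {
    wifiManager.setWifiEnabled(true);
}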
Discovering on Nearby Connections
Discovering a device follows a mostly similar pattern to advertising. The device will connect to the Google API Client and start discovering. When an advertiser is found, the discoverer will request to connect to the advertiser. If the advertiser approves the request, then the two devices will connect and be able to send payloads back and forth. The discoverer will use a PayloadCallback just like the advertiser.
private PayloadCallback mPayloadCallback = new PayloadCallback() {
    @Override
    public void onPayloadReceived(String s, Payload payload) {
        Log.e("Tuts+", new String(payload.asBytes()));
    }

    @Override
    public void onPayloadTransferUpdate(String s, PayloadTransferUpdate payloadTransferUpdate) {}
};
The discoverer's (the mobile app's) ConnectionLifecycleCallback will also look similar to the advertiser's:
private final ConnectionLifecycleCallback mConnectionLifecycleCallback =
        new ConnectionLifecycleCallback() {
            @Override
            public void onConnectionInitiated(String endpointId, ConnectionInfo connectionInfo) {
                Nearby.Connections.acceptConnection(mGoogleApiClient, endpointId, mPayloadCallback);
                mEndpoint = endpointId;
                Nearby.Connections.stopDiscovery(mGoogleApiClient);
            }

            @Override
            public void onConnectionResult(String endpointId, ConnectionResolution result) {}

            @Override
            public void onDisconnected(String endpointId) {}
        };
What is different is that discoverers will require an EndpointDiscoveryCallback that will be used when an advertiser is found but not yet connected to. This object will initiate the request to connect to the advertiser.
private final EndpointDiscoveryCallback mEndpointDiscoveryCallback =
        new EndpointDiscoveryCallback() {
            @Override
            public void onEndpointFound(String endpointId, DiscoveredEndpointInfo discoveredEndpointInfo) {
                if (discoveredEndpointInfo.getServiceId().equalsIgnoreCase(SERVICE_ID)) {
                    Nearby.Connections.requestConnection(
                            mGoogleApiClient,
                            "Name",
                            endpointId,
                            mConnectionLifecycleCallback);
                }
            }

            @Override
            public void onEndpointLost(String endpointId) {
                Log.e("Tuts+", "Disconnected");
            }
        };
Once your discoverer has connected to Google Play Services, you can initiate discovery with the following command:
Nearby.Connections.startDiscovery(
        mGoogleApiClient,
        SERVICE_ID,
        mEndpointDiscoveryCallback,
        new DiscoveryOptions(Strategy.P2P_STAR));
Finally, when you want to disconnect from an advertiser, you can use the disconnectFromEndpoint() method from the Nearby Connections API. It's generally a good idea to do this in your Activity's onDestroy() callback.
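A minimal sketch of that cleanup, using the mGoogleApiClient and mEndpoint fields from the snippets above:
@Override
protected void onDestroy() {
    if (mGoogleApiClient != null && mGoogleApiClient.isConnected()) {
        //Disconnect from the advertiser and release the Google API client
        Nearby.Connections.disconnectFromEndpoint(mGoogleApiClient, mEndpoint);
        mGoogleApiClient.disconnect();
    }
    super.onDestroy();
}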
In this article you learned about the Nearby Connections 2.0 API for Android in the context of creating a companion app for an Android Things IoT device.
It's worth noting that this API can be used for any Android devices that you would like to network together, from phones and tablets to Android TV boxes and Android Wear smartwatches. The API provides a simple way to connect and communicate without the use of the Internet or a centralized router, and adds a great utility to your collection of tools for Android development.
While you're here, check out some of our other posts on Android Things IoT development!