
Code a Widget for Your Android App: Add a Configuration Activity


Application widgets provide your users with easy access to your application’s most frequently used features, while giving your app a presence on the user’s homescreen. By adding a widget to your project, you can provide a better user experience, while encouraging users to remain engaged with your application, as every single time they glance at their homescreen they’ll see your widget, displaying some of your app’s most useful and interesting content.

In this three-part series, we’re building an application widget that has all the functionality you’ll find in pretty much every Android application widget. 

In the first post, we created a widget that retrieves and displays data and performs a unique action in response to onClick events. Then in the second post, we expanded our widget to retrieve and display new data automatically based on a schedule, and in response to user interaction.

We’re picking up right where we left off, so if you don’t have a copy of the widget we created in part one, you can download it from GitHub.

Enhancing Your Widget With a Configuration Activity 

Although our widget functions out of the box, some widgets require initial setup—for example, a widget that displays the user’s messages will require their email address and password. You might also want to give your users the ability to customize the widget, such as changing its colour or even modifying its functionality, like how often the widget updates. 

If your widget is customisable or requires some setup, then you should include a configuration Activity, which will launch automatically as soon as the user places the widget on their homescreen.

Configuration Activities may also come in handy if you have lots of ideas about the information and features that you want to include in your widget. Rather than cramming all this content into a complex and potentially confusing layout, you could provide a configuration Activity where users pick and choose the content that matters to them. 

If you do include a configuration Activity, then don’t get carried away, as there’s a point where choice becomes too much choice. Setting up a widget shouldn’t feel difficult, so it’s recommended that you limit your widget to two or three configuration options. If you’re struggling not to exceed this limit, then consider whether all this choice is really necessary, or whether it’s just adding unnecessary complexity to the setup process.

To create a configuration Activity, you’ll need to complete the following steps.

1. Create the Activity Layout

This is exactly the same as building the layout for a regular Activity, so create a new layout resource file and add all your UI elements as normal. 

A configuration Activity is where the user performs initial setup, so once they’ve completed this Activity they’re unlikely to need it again. Since a widget’s layout is already smaller than a regular Activity layout, you shouldn’t waste valuable space by giving users a way to relaunch the configuration Activity directly from the widget layout.

Here I’m creating a simple configuration_activity.xml layout resource file containing a button that, when tapped, will create the application widget. 
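The exact layout is up to you, but a minimal version might look something like the following sketch; the create_widget_button ID is an assumption, and the configuration Activity below refers back to it:

```xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:gravity="center"
    android:orientation="vertical"
    android:padding="16dp">

    <!-- Tapping this button completes setup and creates the widget -->
    <Button
        android:id="@+id/create_widget_button"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Create Widget" />

</LinearLayout>
```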

2. Create Your Configuration Class 

Your configuration Activity must retrieve the App Widget ID from the extras of the Intent that launched it, and pass that same ID back in its Activity result.

If you do include a configuration Activity, then note that the onUpdate() method won’t be called when the user creates a widget instance, as the ACTION_APPWIDGET_UPDATE broadcast isn’t sent when a configuration Activity is launched. It’s the configuration Activity’s responsibility to request this first update directly from the AppWidgetManager. The onUpdate() method will be called as normal for all subsequent updates.

Create a new Java class named configActivity and add the following: 
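The original listing isn’t reproduced here, but the following sketch covers the essential moves: default the result to RESULT_CANCELED, retrieve the App Widget ID from the launching Intent, request the first update yourself, and pass the ID back in the Activity result. The NewAppWidget.updateAppWidget() helper and the view IDs are assumptions based on the widget built in parts one and two.

```java
import android.app.Activity;
import android.appwidget.AppWidgetManager;
import android.content.Intent;
import android.os.Bundle;
import android.view.View;

public class configActivity extends Activity {

    private int appWidgetId = AppWidgetManager.INVALID_APPWIDGET_ID;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // If the user backs out before tapping "Create Widget", the widget
        // host should discard the widget, so default the result to CANCELED.
        setResult(RESULT_CANCELED);
        setContentView(R.layout.configuration_activity);

        // Retrieve the App Widget ID from the Intent that launched us.
        Bundle extras = getIntent().getExtras();
        if (extras != null) {
            appWidgetId = extras.getInt(
                    AppWidgetManager.EXTRA_APPWIDGET_ID,
                    AppWidgetManager.INVALID_APPWIDGET_ID);
        }

        // Without a valid ID there's no widget to configure, so bail out.
        if (appWidgetId == AppWidgetManager.INVALID_APPWIDGET_ID) {
            finish();
            return;
        }

        findViewById(R.id.create_widget_button).setOnClickListener(
                new View.OnClickListener() {
                    @Override
                    public void onClick(View view) {
                        createWidget();
                    }
                });
    }

    private void createWidget() {
        // The ACTION_APPWIDGET_UPDATE broadcast isn't sent when a
        // configuration Activity is used, so request the first update here.
        AppWidgetManager appWidgetManager = AppWidgetManager.getInstance(this);
        NewAppWidget.updateAppWidget(this, appWidgetManager, appWidgetId);

        // Report success and hand the original App Widget ID back.
        Intent resultValue = new Intent();
        resultValue.putExtra(AppWidgetManager.EXTRA_APPWIDGET_ID, appWidgetId);
        setResult(RESULT_OK, resultValue);
        finish();
    }
}
```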

3. Declare the Configuration Activity in Your Project’s Manifest

When you declare your configuration Activity in the Manifest, you need to specify that it accepts the ACTION_APPWIDGET_CONFIGURE action: 
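Assuming the class name above, the declaration might look like this:

```xml
<activity android:name=".configActivity">
    <intent-filter>
        <action android:name="android.appwidget.action.APPWIDGET_CONFIGURE" />
    </intent-filter>
</activity>
```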

4. Declare the Configuration Activity in Your AppWidgetProviderInfo File 

Since the configuration Activity is referenced outside of your package scope, you need to declare it using the fully qualified namespace: 
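For example, with a hypothetical com.example.myapplication package, the android:configure attribute is the only thing that needs to change in the file:

```xml
<appwidget-provider xmlns:android="http://schemas.android.com/apk/res/android"
    android:configure="com.example.myapplication.configActivity">
    <!-- The widget's other attributes (minWidth, minHeight, initialLayout,
         updatePeriodMillis, and so on) stay exactly as before. -->
</appwidget-provider>
```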

Now, whenever you create a new instance of this widget, the configuration Activity will launch, and you’ll need to complete all the options in this Activity before your widget is created. In this instance, that simply means giving the Create Widget button a tap. 

Test the application widget configuration Activity

Remember, you can download the finished widget, complete with configuration Activity, from GitHub. 

Application Widget Best Practices

Throughout this series, we’ve been building an application widget that demonstrates all the core features you’ll find in pretty much every Android application widget. By this point, you know how to create a widget that retrieves and displays data, reacts to onClick events, updates with new information based on a schedule and in response to user interaction, and has a custom preview image. 

These may be the fundamental building blocks of Android application widgets, but creating something that just works is never enough—you also need to ensure your widget is providing a good user experience, by following best practices. 

Perform Time-Consuming Operations Off the Main Thread

Widgets have the same runtime restrictions as normal broadcast receivers, so they have the potential to block Android’s all-important main UI thread. 

Android has a single main thread where all your application code runs by default, and since this thread can only process one task at a time, it’s easy to block. Perform any kind of long-running or intensive operation on the main thread, and your app’s user interface will be unresponsive until the operation completes. In the worst-case scenario, this can trigger Android’s Application Not Responding (ANR) dialog, and the user may force-close your application. 

If your widget does need to perform any time-consuming or labour-intensive tasks, then you should use a service rather than the main thread. You can create the service as normal and then start it in your AppWidgetProvider. For example, here we’re using a service to update our widget:
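The original listing is omitted here, but the pattern looks something like the sketch below, which uses an IntentService so the work happens on a background worker thread; UpdateWidgetService and NewAppWidget.updateAppWidget() are assumed names.

```java
import android.app.IntentService;
import android.appwidget.AppWidgetManager;
import android.content.ComponentName;
import android.content.Intent;

public class UpdateWidgetService extends IntentService {

    public UpdateWidgetService() {
        super("UpdateWidgetService");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        // onHandleIntent() runs on a worker thread, so slow operations such
        // as fetching the widget's data are safe to perform here.
        AppWidgetManager appWidgetManager = AppWidgetManager.getInstance(this);
        int[] appWidgetIds = appWidgetManager.getAppWidgetIds(
                new ComponentName(this, NewAppWidget.class));

        for (int appWidgetId : appWidgetIds) {
            NewAppWidget.updateAppWidget(this, appWidgetManager, appWidgetId);
        }
    }
}
```

Your AppWidgetProvider’s onUpdate() then simply calls context.startService(new Intent(context, UpdateWidgetService.class)), and the service is declared in the Manifest as usual.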

Create a Flexible Layout

Creating a layout that can adapt to a range of screen configurations is one of the most important rules of developing for Android, and it’s a rule that also extends to widgets. Just like regular Activities, your widget’s layout needs to be flexible enough to display and function correctly regardless of the screen configuration, but there are some additional reasons why widget layouts need to be as flexible as possible. 

Firstly, if your widget is resizeable, then the user can manually increase and decrease the size of your widget, which is something you don’t have to worry about with traditional Activities.

Secondly, there’s no guarantee that your widget’s minWidth and minHeight measurements will perfectly align with a particular device’s homescreen grid. When a widget isn’t a perfect fit, the Android operating system will stretch that widget horizontally and/or vertically to occupy the minimum number of cells required to satisfy the widget’s minWidth and minHeight constraints. While this shouldn’t be a significant increase, it’s still worth noting that right from the start your widget may be occupying more space than you’d anticipated. 

Creating a flexible widget layout follows the same best practices as designing a flexible Activity layout. There’s already plenty of information available on this topic, but here are some of the most important points to bear in mind when creating your widget layouts: 

  • Avoid absolute units of measurement, such as defining a button in pixels. Due to varying screen densities, 50 pixels doesn’t translate to the same physical size on every device, so a 50px button will appear larger on low-density screens and smaller on high-density screens. You should always specify your layout’s dimensions in density-independent pixels (dp) and use flexible values such as wrap_content and match_parent.
  • Provide alternate resources that are optimised for different screen densities. You can also provide layouts that are optimised for different screen sizes, using configuration qualifiers such as smallestWidth (sw<N>dp). Don’t forget to provide a default version of every resource, so your app has something to fall back on if it ever encounters a screen with characteristics that it doesn’t have a specific resource for. You should design these default resources for normal, medium-density screens. 
  • Test your widget across as many screens as possible, using the Android emulator. When creating an AVD, you can specify an exact screen size and resolution, using the Screen controls that appear in the Configure this hardware profile menu. Don’t forget to test how your widget handles being resized across all these different screens!
Test your widget across a range of AVDs using the emulator

Don’t Wake the Device

Updating a widget requires battery power, and the Android operating system has no qualms about waking a sleeping device in order to perform an update, which amplifies the impact your widget has on the device’s battery. 

If the user realises that your widget is draining their battery, then at the very least they’re going to delete that widget from their homescreen. In the worst-case scenario, they may even delete your app entirely. 

You should always aim to update your widget as infrequently as possible while still displaying timely and relevant information. However, some widgets may have a legitimate reason for requiring frequent updates—for example, if the widget includes highly time-sensitive content. 

In this scenario, you can reduce the impact these frequent updates have on battery life, by performing updates based on an alarm that won’t wake a sleeping device. If this alarm goes off while the device is asleep, then the update won’t be delivered until the next time the device wakes up.

To update based on an alarm, you need to use the AlarmManager to set an alarm with an Intent that your AppWidgetProvider will receive, and then set the alarm type to either ELAPSED_REALTIME or RTC, as these alarm types won’t wake a sleeping device. For example:
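Here’s a rough sketch of that setup, perhaps called from your AppWidgetProvider’s onEnabled(Context context) callback; the half-hour interval and the NewAppWidget class name are illustrative assumptions:

```java
// Schedule repeating, non-waking widget updates.
AlarmManager alarmManager =
        (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);

// The widget provider itself receives the update broadcast.
Intent intent = new Intent(context, NewAppWidget.class);
intent.setAction(AppWidgetManager.ACTION_APPWIDGET_UPDATE);
PendingIntent pendingIntent = PendingIntent.getBroadcast(
        context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);

// ELAPSED_REALTIME (unlike ELAPSED_REALTIME_WAKEUP) never wakes a sleeping
// device; an alarm that fires during sleep is delivered on wake-up.
alarmManager.setInexactRepeating(
        AlarmManager.ELAPSED_REALTIME,
        SystemClock.elapsedRealtime() + AlarmManager.INTERVAL_HALF_HOUR,
        AlarmManager.INTERVAL_HALF_HOUR,
        pendingIntent);
```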

If you do use an alarm, then make sure you open your project’s AppWidgetProviderInfo file (res/xml/new_app_widget_info.xml) and set the updatePeriodMillis to zero ("0"). If you forget this step, the updatePeriodMillis value will override the AlarmManager and your widget will still wake the device every single time it requires an update. 
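In other words:

```xml
<appwidget-provider xmlns:android="http://schemas.android.com/apk/res/android"
    android:updatePeriodMillis="0">
    <!-- Leave the file's other attributes untouched. -->
</appwidget-provider>
```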

Conclusion

In this three-part series, we’ve created an Android application widget that demonstrates how to implement all the most common features found in app widgets. 

If you’ve been following along since the beginning, then by this point you’ll have built a widget that updates automatically and in response to user input, and that’s capable of reacting to onClick events. Finally, we looked at some best practices for ensuring your widget provides a good user experience, and discussed how to enhance your widget with a configuration Activity.

Thanks for reading, and while you're here, check out some of our other great posts on Android app development!

Jessica Thornsby · 2018-03-12

Conversation Design User Experiences for SiriKit and iOS


Introduction

A lot of articles, our site included, have focused on helping readers create amazing iOS apps by designing a great mobile user experience (UX). 

However, with the emergence of the Apple Watch a few years ago, alongside CarPlay, and more recently the HomePod, we are starting to see a lot more apps and IoT appliances that use voice commands instead of visual interfaces. The prevalence of IoT devices such as the HomePod and other voice assistants, as well as the explosion in voice-assistant-enabled third-party apps, has given rise to a whole new category of user experience design methodologies focusing on Voice User Experiences (VUX), or Conversational Design UX.

This led Apple to develop SiriKit a few years ago, giving third-party developers the ability to extend their apps so that users can converse with them more naturally. As SiriKit opens up more to third-party developers, we are starting to see more apps becoming part of SiriKit, such as the prominent messaging apps WhatsApp and Skype, as well as payment apps like Venmo and Apple Pay. 

SiriKit’s aim is to blur the boundaries between apps through a consistent conversational user experience that enables apps to remain intuitive, functional and engaging through pre-defined intents and domains. This tutorial will help you apply best practices to create intuitive conversational design user experiences without visual cues. 

Objectives of This Tutorial

This tutorial will teach you to design audibly engaging SiriKit-enabled apps through best practices in VUX. You will learn about:

  • designing for voice interactions
  • applying conversational design UX best practices
  • testing SiriKit-enabled apps

Assumed Knowledge

I'll assume you have worked with SiriKit previously, and have some experience coding with Swift and Xcode. 

Designing for Voice Interactions

Creating engaging apps requires a well thought-out design for user experience—UX design for short. One common underlying principle to all mobile platforms is that design is based on a visual user interface. However, when designing for platforms where users engage via voice, you don’t have the advantage of visual cues to help guide users. This brings a completely new set of design challenges.

The absence of a graphical user interface forces users to work out how to communicate with their devices by voice, determining what they are able to say as they navigate between various states in order to achieve their goals. The Interaction Design Foundation describes the situation in conversational user experience: 

“In voice user interfaces, you cannot create visual affordances. Users will have no clear indications of what the interface can do or what their options are.” 

As a designer, you will need to understand how people communicate with technologies naturally—the fundamentals of voice interaction. According to recent studies by Stanford, users generally perceive and interact with voice interfaces in much the same way they converse with other people, irrespective of the fact that they are aware they are speaking to a device. 

The difficulty in being able to anticipate the different ways in which people speak has led to advances in machine learning over the past few years, with natural language processing (NLP) allowing platforms to understand humans in a more natural manner, by recognizing intents and associative domains of commands. One prominent platform is Apple’s Siri, and its framework for third-party developers, SiriKit. 

Overview of SiriKit

While most would understand Siri as primarily focusing on non-visual voice assistance, its integration into Apple’s ecosystem allows users to trigger voice interactions through their operating system, be it iOS, watchOS, CarPlay, or the HomePod. 

The first three platforms provide limited visual guidance in addition to audible feedback, whereas the HomePod provides only audible feedback. When the user is driving with iOS or CarPlay, the platform provides even less visual feedback and more audio feedback, so the amount of information a user receives is dynamic. As an app designer, you will need to cater to both kinds of interaction. 

Providing commands with Siri

This means that SiriKit calibrates how much it offers visually or verbally based on the state of the device and user, and as long as you conform to best practices, SiriKit will gracefully handle all of these for you. 

Intents and Domain Handling

The framework handles user requests through two primary processes: intents and domain handling. 

Intents are managed through the voice-handling Intents framework, the Intents App Extension, which takes user requests and turns them into app-specific actions, such as booking a car-share ride or sending money to someone. 

Siri Interaction Workflow

The Intents UI app extension, on the other hand, allows you to deliver minimal visual content confirmation once a user has made a request and your app wants to confirm prior to completing the request.

Calling a ride-share app with Siri

SiriKit classifies intents (user requests) into specific types, called domains. As of iOS 11, third-party developers are able to leverage the following domains and interactions:

List of SiriKit domains and interactions

It may initially seem that the choice is quite limited, but Apple’s justification is that this helps manage user confidence and expectations carefully while gradually allowing users to learn and build their knowledge of how to interact with Siri. This also allows Apple and the community to scale over time whilst blurring the boundaries between apps that sit behind the voice-assistance interface. 

iOS developers taking advantage of SiriKit also benefit from the contextual knowledge the platform provides. This includes grouping sentences by conversational context, according to the intents and domains that machine learning delivers. That is, Siri will try to figure out whether your next command is part of the same conversational context or a new one. If you say “Please book an Uber ride”, Siri would know that you are intending to book a car-share ride, in the carshare domain. However, the app would need more information, such as what type of ride. If your next command was “Uber Pool”, it would know that the second command is within the same context. 

Leveraging SiriKit lets you benefit from Apple’s platform orchestrating a lot of the heavy lifting, allowing you to focus on what’s important: delivering value. Having said that, you still need to be a good ‘Siri citizen’. Next, you will learn about various best practices you can follow in order to create a conducive user experience with non-visual communication and voice interaction.

For more information on developing with SiriKit, check out Create SiriKit Extensions in iOS 10.

Applying Conversational Design UX Best Practices 

Let's take a look at some best practices that you can immediately apply to your SiriKit extension in order to ensure your app provides a pleasant, logical and intuitive conversational voice interface for your users. 

Inform Users of their Options 

The first guiding principle is to succinctly inform your users of what options they have during a particular state of interaction. 

Whereas graphical user experiences can effortlessly provide visual context back to their users, for instance through modal dialog boxes, that same luxury doesn’t exist with voice-enabled apps. Users have varied expectations of what natural language processing can handle. Some will be very conservative and might not realize the power of Siri, while others might start by asking something complex that doesn’t make sense to Siri. 

You need to design your user experience so as to provide users with information on what they are able to do at a particular juncture, in the form of options. 

Siri’s global options

What options your app returns should be contextually relevant. In the following screenshot, the contact person has multiple phone numbers, and if the user hasn’t explicitly stated which one to use, the user should be prompted. 

Contact Resolution with SiriKit

SiriKit uses contact resolution, which you have access to via the SDK, to guide the app to determine which contact phone number (or even which contact name if more than one contact entry has the same name) the end-user intended. According to Apple’s documentation:

During resolution, SiriKit prompts you to verify each parameter individually by calling the resolution methods of your handler object. In each method, you validate the provided data and create a resolution result object indicating your success or failure in resolving the parameter. SiriKit uses your resolution result object to determine how to proceed. For example, if your resolution result asks the user to disambiguate from among two or more choices, SiriKit prompts the user to select one of those choices.

For more information on resolving user intents, refer to Apple’s documentation on Resolving and Handling Intents.

Be Fast & Accurate

It is important that in your app’s conversational user experience, you respond to commands expeditiously, as users expect a fast response. What this means is that you should design your interaction workflow to provide the quickest set of actions to achieve function completion without unnecessary prompts or screens. 

Apple encourages you to take the user directly to the content without any intermediary screens or messages. If a user needs to be authenticated, take the user to the authentication screen directly, and then make sure to maintain the context so the user can continue in a logical manner to complete his or her actions. Apple’s Human Interface Guidelines advise that you will need to:

Respond quickly and minimize interaction. People use Siri for convenience and expect a fast response. Present efficient, focused choices that reduce the possibility of additional prompting. 

Limit the Amount of Information

The Amazon Echo design guidelines recommend that you don’t list more than three different options in a single interaction, but rather provide users with the most popular options first. Then, if you need to provide more than three options, provide an option at the end to go through the rest of them. 

Siri Options in response

Prioritize and order the options according to which ones users are most likely to use, and allow users to explicitly call out some of the less popular options without reading them all out. You could also dynamically adjust the prominent options based on your users' historical preferences. 

Most importantly, don’t demonstrate prejudice or deception! That is, don’t misrepresent information or weight the prompts to favor the most expensive options, such as listing the most expensive car-share rides first and the cheaper car-pooling options last. This is a sure way for your customers to lose confidence and trust in your app. 

Provide Conversational Breadcrumbs 

It's hard for users to work out where they are without visual cues, and even if SiriKit can keep track of the current context, users tend to interact with SiriKit whilst doing something else, such as driving or jogging. 

Therefore, you always need to provide an informative response to a command, not only confirming it but reminding the user of the context. For instance, when the user asks to book a car-share ride, you can provide context around your response by saying something like: “You’ve booked a ride for today at 5 pm, using AcmeCar” instead of just responding with “booking confirmed”. 

In other words, provide enough contextual information in your response for users to understand what has been confirmed, without having to glance at their phone to confirm their intentions. 

Provide an Experience That Doesn’t Require Touching or Glancing

As Apple’s ecosystem of Siri-enabled devices expands beyond iOS and watchOS into devices that lack a visual interface, it is important that your responses and interactions don’t require users to glance back at the screen or even touch their devices to confirm something. The verbal responses should be contextual and succinct enough (including providing a limited subset of options), giving users just the right amount of information for them to continue to blindly interact with their device. 

The power of Siri comes from users being able to keep their iPhones in their pockets and use headphones to interact with their voice assistants, to shout a new reminder to their HomePods from across the room, or to listen to messages whilst driving their CarPlay-enabled vehicles. Interacting with their SiriKit-enabled apps should only require secondary focus and attention, not primary touching or visual confirmation. 

The exception, however, is when an intent requires an extra layer of security and authentication prior to fulfilling the request. 

Require Authentication for Certain Intents

It is important that you identify intents that do require specific authentication and authorization prior to being used. If a user asks “What is the weather”, you won’t need authentication. However, if a user asks to “Pay Jane $20 with Venmo”, you obviously should require that the user authenticate first. 

SiriKit manages intent restriction where users would need to authenticate via FaceID, TouchID or a passcode when the device is locked, by requiring that you specify in your application’s info.plist explicitly which intents are restricted while locked:
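The restriction lives in the Intents extension’s Info.plist, under the NSExtensionAttributes dictionary. A payment intent, for instance, might be locked down like this (INSendPaymentIntent is just an example):

```xml
<key>NSExtension</key>
<dict>
    <key>NSExtensionAttributes</key>
    <dict>
        <key>IntentsSupported</key>
        <array>
            <string>INSendPaymentIntent</string>
        </array>
        <key>IntentsRestrictedWhileLocked</key>
        <array>
            <!-- The user must authenticate before this intent is handled -->
            <string>INSendPaymentIntent</string>
        </array>
    </dict>
    <key>NSExtensionPointIdentifier</key>
    <string>com.apple.intents-service</string>
</dict>
```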

Anticipate and Handle Errors 

As well as using voice prompting to handle disambiguation, as discussed earlier, you will also need to ensure that you anticipate and handle as many error scenarios as you can. 

For instance, when a user tries to send money to another participant and that participant doesn’t have a required email address, or has multiple phone numbers on file, you need to handle that. SiriKit provides the INIntentResolutionResult class, which allows you to set a resolution for the appropriate data type you are trying to resolve: 
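As a sketch of the idea, here’s what resolving the payee of a payment intent could look like; findContacts(matching:) is a hypothetical lookup into your own user store:

```swift
import Intents

class SendPaymentIntentHandler: NSObject, INSendPaymentIntentHandling {

    // Resolve who the money should go to, asking Siri to prompt or
    // disambiguate when the payee is missing or ambiguous.
    func resolvePayee(for intent: INSendPaymentIntent,
                      with completion: @escaping (INPersonResolutionResult) -> Void) {
        guard let payee = intent.payee else {
            // No payee supplied: have Siri ask the user for one.
            completion(.needsValue())
            return
        }

        let matches = findContacts(matching: payee)
        switch matches.count {
        case 0:
            completion(.unsupported())
        case 1:
            completion(.success(with: matches[0]))
        default:
            // Several plausible contacts: Siri reads out the choices.
            completion(.disambiguation(with: matches))
        }
    }

    func handle(intent: INSendPaymentIntent,
                completion: @escaping (INSendPaymentIntentResponse) -> Void) {
        // A real handler would perform the payment before reporting success.
        completion(INSendPaymentIntentResponse(code: .success, userActivity: nil))
    }

    private func findContacts(matching person: INPerson) -> [INPerson] {
        // Hypothetical: search the app's own contacts for matches.
        return []
    }
}
```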

Apple recommends that you try to extrapolate historical information from user behaviors where possible, to reduce the number of interaction steps in the workflow. Take a look at the INIntentError documentation, which provides a set of possible errors you can handle, such as interactionOperationNotSupported or requestTimedOut.

Add Custom Vocabularies

SiriKit supports adding custom vocabularies through the use of the plist file AppIntentVocabulary.plist to help improve your app’s conversational user experience. You can use this for onboarding users as well as for including specific terms that your app can recognize. 

Providing your users with example commands helps with onboarding by introducing them to your app's capabilities. If you ask Siri "What can you do?", it will respond not only with the built-in functionality, but also with suggestions from third-party apps. To promote your app’s functionality to your users globally, include intent examples in your AppIntentVocabulary.plist file:
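For a messaging app, the examples section of AppIntentVocabulary.plist might look like the following ("MyChatApp" is a placeholder app name):

```xml
<key>IntentPhrases</key>
<array>
    <dict>
        <key>IntentName</key>
        <string>INSendMessageIntent</string>
        <key>IntentExamples</key>
        <array>
            <string>Send a message to Jane using MyChatApp</string>
            <string>Tell Jane I'm running late using MyChatApp</string>
        </array>
    </dict>
</array>
```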

You can also help Siri understand and recognize terms that are specific only to your app by supplying it with an array of vocabulary words. These are words that apply to any user of your app (such as if you have a specific term for messaging that your app uses), but if you need to provide user-specific terms, take advantage of INVocabulary. Within the plist, you add a ParameterVocabularies key for your custom global terms and associate each term with a specific corresponding intent property object (you may have multiple intents for each term). 

Consult Apple’s documentation on Registering Custom Vocabulary with SiriKit to learn how to create user-specific terms. 

Testing Your SiriKit-Enabled Apps 

Finally, as of Xcode 9, you can conveniently test Siri in the Simulator through the new XCUISiriService class, which you access from XCUIDevice. Leverage this to test all of your intent phases, custom vocabularies, and even app synonyms, and to ensure your designed user experiences work as intended. 

For the purposes of this tutorial, clone the tutorial project repo, open up the project in Xcode, and run it to make sure it works in your environment. With the simulator running, enable Siri by going to Settings. Summon Siri on your Simulator as you would do on your physical device and say something like “Send a message to Jane”. 

Next, in Xcode, open the file titled MessagingIntentsUITests.swift and you will notice the single test case method:
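The method body isn’t reproduced above, but its core boils down to something like this (the utterance is an assumption):

```swift
import XCTest

class MessagingIntentsUITests: XCTestCase {

    func testSiriMessagingIntent() {
        // Summon Siri on the Simulator and feed it an utterance, exactly
        // as though the user had spoken it.
        XCUIDevice.shared.siriService.activate(
            voiceRecognitionText: "Send a message to Jane saying I'm on my way")

        // Assertions against Siri's response UI would follow here.
    }
}
```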

Testing SiriKit with Xcode and Simulator

You can add as many intents as you want to test. Finally, go ahead and run this test case and you will observe the Simulator trigger Siri and speak out the intended command. Remember, this is not a substitute for real human testing and dealing with different accents and background noise, but it's useful nonetheless as part of your automated workflow.

Conclusion

Designing user experiences for voice interaction is a whole new world from visual UX design. Best practices and techniques are still being discovered by the designers and developers who are pioneering this new field. 

This post has given you an overview of the current best practices for conversation design UX on iOS with SiriKit. You saw some of the key principles in designing a voice interface, as well as some of the ways that you can interface with SiriKit as a dev. I hope this has inspired you to experiment with voice interfaces in your next app!

While you're here, check out some of our other posts on cutting-edge iOS app development.

Or check out some of our comprehensive video courses on iOS app development:

Doron Katz · 2018-03-13


Detaching Expo Apps to ExpoKit: Concepts


In this post, you'll learn what ExpoKit is and how it is used for adding native functionality to Expo apps. You'll also learn some of its pros and cons. 

In my Easier React Native Development With Expo post, you learned about how Expo makes it easier for beginners to begin creating apps with React Native. You also learned that Expo allows developers to get up and running with developing React Native apps faster because there's no longer a need to set up Android Studio, Xcode, or other development tools.

But as you have also seen, Expo doesn't support all of the native features that an app might need. Though the Expo team is always working to support more native functionality, it's a good idea to learn how to convert an existing Expo project to a standard native project so you can easily transition if the need arises. So, in this two-part series, we'll take a look at how to do that. 

In this post, you'll learn what ExpoKit is and when you're going to need it, as well as which of the Expo platform features are retained and lost once you detach to ExpoKit. 

Prerequisites

This tutorial assumes that you've already set up your computer for Expo and React Native development. This means you will need either Android Studio or Xcode or both, depending on where you want to deploy. Be sure to check out the Get Started With Expo guide, and also the "Getting Started" guide in the React Native docs under the "Building Projects with Native Code" tab for your specific platform if you haven't done so already. 

Knowledge of Node.js is helpful but not required.

What Is ExpoKit?

ExpoKit is an Objective-C and Java library that allows you to use the Expo platform within a standard React Native project. When I say "standard React Native project", I mean one that was created using the react-native init command. 

The downside of detaching to ExpoKit is that you will have to set up the standard native development environment for React Native!

Another downside is that you're limited to the React and React Native version used by ExpoKit at the time you detach your app. This means that there might be compatibility issues that you will need to resolve if the native module you're trying to install depends on an earlier version of React or React Native. 

If you think your app is going to need a whole lot of native modules which the built-in React Native and Expo APIs don't already support, I suggest you avoid using the Expo APIs. That way, you can easily "eject" to a standard React Native project at the time you need to start using custom native modules. 

When to Detach to ExpoKit?

You might want to detach your existing Expo project for any of the following reasons:

  • The API exposed by native features supported by Expo doesn't cover your use case.
  • You need to use a native functionality that's not currently supported by the Expo platform. Examples include Bluetooth and background tasks.
  • You want to use specific services. Currently, Expo uses Firebase for real-time data and Sentry for error reporting. If you want to use an alternative service, your only options are to write your own code to communicate with that service's HTTP API, or to install an existing native module that does the job.
  • You have an existing Continuous Integration setup which doesn't play well with Expo—for example, if you're using Fastlane or Bitrise for continuous integration. Expo doesn't really integrate with those services out of the box, so you'll have to write your own integration code if you want to use them while still on the Expo platform.

Features Retained When Detaching to ExpoKit

Detaching to ExpoKit means that you will lose some of the features offered by the Expo platform. However, the following essential features are still retained:

  • Expo APIs. You'll still be able to use Expo APIs such as the Permissions API.
  • Live Reload. Detached Expo apps are still able to use live reload while you're developing the app. The only difference is that you'll no longer be able to use the Expo client app. If you're developing for Android, you can still use your Android device or an emulator such as Genymotion to test the app. If you're developing for iOS, the app can be run on the simulators you installed in Xcode. You can also run it on your iPhone or iPad, but you need to follow some additional steps which I won't be covering in this tutorial.

Features You Lose When Detaching to ExpoKit

By detaching to ExpoKit, you will lose the following features:

  • Easy app sharing by means of QR code and Expo Snack. Once you've detached to ExpoKit, you'll notice that you can still share your app via the Expo XDE. It will still generate a QR code, but that code will no longer work when you scan it with the Expo client app.
  • Building standalone apps via Expo's servers. You can no longer use the exp build command to build the .ipa or .apk files on Expo's servers. This means that you have to install Android Studio or Xcode (depending on which platform you want to deploy) and build the app locally yourself. Alternatively, you can use Microsoft App Center to build the app if you don't have a local development environment set up yet. Note that you cannot use commands like react-native run-android or react-native run-ios to run the app, as you would in a standard React Native project. 
  • Expo's Push Notifications service. Expo no longer manages your push certificates after detaching, so the push notification pipeline needs to be manually managed.

What We'll Be Creating

To showcase the benefit of detaching to ExpoKit, we'll be creating an app which needs a native feature that the Expo platform does not currently support. The app will be a location-sharing app. It will mostly run in the background, fetching the user's current location. It will then send that location via Pusher. We'll also create a web page showing the user's current location on a map.

Here's what the app will look like:

location tracking app

You can find the full source of the project in the tutorial GitHub repo.

Setting Up the App

In the remainder of this post, we'll focus on getting our app set up. Then, in the next post, we'll flesh out some of the key code to interact with ExpoKit.

Creating a Pusher App

If you want to use Pusher's services in your app, you'll need to create an app in the Pusher dashboard. Once logged in, go to your dashboard, click on Your apps and then Create new app, and enter the name of the app:

create Pusher app

Once the app is created, go to App Settings and check the Enable client events check box. This will allow us to trigger Pusher events directly from the app instead of from a server. Then click on Update to save the changes:

enable client events

You can find the keys under the App keys tab. We will be needing those later, once we connect to the Pusher app.

Creating a Google App

Similarly, we need to create a Google project in order to use the Google Maps API and the Geolocation API. Go to console.developers.google.com and create a new project:

create Google project

Next, go to the project dashboard and click on Enable APIs and Services. Search for Google Maps JavaScript API and Google Maps Geocoding API and enable those.

From the project dashboard, go to Credentials and click on Create Credentials > API Key. Take note of the API key that it generates as we will be using it later.

Creating a New Expo Project

Run the following commands in your working directory:
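At the time of writing, Expo’s command-line tool was the exp package, so the setup plausibly looked like the following; the project name location-tracker is arbitrary:

```
npm install -g exp
exp init location-tracker
cd location-tracker
exp start
```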

Now the Expo app is ready to test. Just scan the QR code with your Expo client app for iOS or Android.

Coding the App

Now we're ready to start coding the app. We'll start developing as a standard Expo project, and then we'll detach to ExpoKit when we need to use custom native features.

Generating the Unique Tracking Code

Clear the contents of the App.js file in the root of the project directory and add the following imports:
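The original import list isn’t shown, but based on what the app does, it plausibly included React, a few React Native primitives, Expo’s Location and Permissions APIs, and the pusher-js client:

```js
import React from 'react';
import { StyleSheet, Text, View } from 'react-native';
import { Location, Permissions } from 'expo';
import Pusher from 'pusher-js/react-native';
```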

We'll also use a custom header component:

In the constructor, set the unique_code to its initial state:
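A minimal sketch of that constructor; whether the code starts out as null or is generated immediately is a detail of the original tutorial:

```js
constructor(props) {
  super(props);
  this.state = {
    unique_code: null // the tracking code the UI will display
  };
}
```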

The UI of our app will display this unique code.

Finally, here's the code for the Header (components/Header.js) component:
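The component itself can be a simple stateless function; the styling below is an assumption:

```js
import React from 'react';
import { View, Text, StyleSheet } from 'react-native';

const Header = ({ text }) => (
  <View style={styles.header}>
    <Text style={styles.header_text}>{text}</Text>
  </View>
);

const styles = StyleSheet.create({
  header: {
    padding: 20,
    backgroundColor: '#333'
  },
  header_text: {
    color: '#FFF',
    fontSize: 20,
    fontWeight: 'bold'
  }
});

export default Header;
```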

Conclusion

This has been the first part of our two-part series on detaching Expo apps to ExpoKit. In this post, you learned the concepts behind ExpoKit and began setting up a project that will use ExpoKit functionality. 

In the next post, we'll detach the app to ExpoKit and then continue coding it so we can run it on a device.

In the meantime, check out some of our other posts about React Native app development!

Wern Ancheta · 2018-03-14

Detaching Expo Apps to ExpoKit: Concepts

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-30661

In this post, you'll learn what ExpoKit is and how it is used for adding native functionality to Expo apps. You'll also learn some of its pros and cons. 

In my Easier React Native Development With Expo post, you learned about how Expo makes it easier for beginners to begin creating apps with React Native. You also learned that Expo allows developers to get up and running with developing React Native apps faster because there's no longer a need to set up Android Studio, Xcode, or other development tools.

But as you have also seen, Expo doesn't support all of the native features that an app might need. Though the Expo team is always working to support more native functionality, it's a good idea to learn how to convert an existing Expo project to a standard native project so you can easily transition if the need arises. So, in this two-part series, we'll take a look at how to do that. 

In this post, you'll learn what ExpoKit is and when you're going to need it, as well as which of the Expo platform features are retained and lost once you detach to ExpoKit. 

Prerequisites

This tutorial assumes that you've already set up your computer for Expo and React Native development. This means you will need either Android Studio or Xcode or both, depending on where you want to deploy. Be sure to check out the Get Started With Expo guide, and also the "Getting Started" guide in the React Native docs under the "Building Projects with Native Code" tab for your specific platform if you haven't done so already. 

Knowledge of Node.js is helpful but not required.

What Is ExpoKit?

ExpoKit is an Objective-C and Java library that allows you to use the Expo platform within a standard React Native project. When I say "standard React Native project", I mean one that was created using the react-native init command. 

The downside of detaching to ExpoKit is that you will have to set up the standard native development environment for React Native!

Another downside is that you're limited to the React and React Native versions used by ExpoKit at the time you detach your app. This means that there might be compatibility issues that you will need to resolve if a native module you're trying to install depends on a different version of React or React Native than the one ExpoKit uses. 

If you think your app is going to need a whole lot of native modules which the built-in React Native and Expo APIs don't already support, I suggest you avoid using the Expo APIs. That way, you can easily "eject" to a standard React Native project at the time you need to start using custom native modules. 

When Should You Detach to ExpoKit?

You might want to detach your existing Expo project for any of the following reasons:

  • The API exposed by native features supported by Expo doesn't cover your use case.
  • You need to use a native functionality that's not currently supported by the Expo platform. Examples include Bluetooth and background tasks.
  • You want to use specific services. Currently, Expo uses Firebase for real-time data and Sentry for error reporting. If you want to use an alternative service, your only option is to write your own code to talk to that service's HTTP API, or to install an existing native module that does the job.
  • You have an existing Continuous Integration setup which doesn't play well with Expo—for example, if you're using Fastlane or Bitrise for continuous integration. Expo doesn't really integrate with those services out of the box, so you'll have to write your own integration code if you want to use them while still on the Expo platform.

Features Retained When Detaching to ExpoKit

Detaching to ExpoKit means that you will lose some of the features offered by the Expo platform. However, the following essential features are still retained:

  • Expo APIs. You'll still be able to use Expo APIs such as the Permissions API.
  • Live Reload. Detached Expo apps are still able to use live reload while you're developing the app. The only difference is that you'll no longer be able to use the Expo client app. If you're developing for Android, you can still use your Android device or an emulator such as Genymotion to test the app. If you're developing for iOS, the app can be run on the simulators you installed in Xcode. You can also run it on your iPhone or iPad, but you need to follow some additional steps which I won't be covering in this tutorial.

Features You Lose When Detaching to ExpoKit

By detaching to ExpoKit, you will lose the following features:

  • Easy app sharing by means of QR code and Expo Snack. Once you've detached to ExpoKit, you'll notice that you can still share your app via the Expo XDE. It will still generate a QR code, but that code will no longer work when you scan it with the Expo client app.
  • Building standalone apps via Expo's servers. You can no longer use the exp build command to build the .ipa or .apk files on Expo's servers. This means that you have to install Android Studio or Xcode (depending on which platform you want to deploy) and build the app locally yourself. Alternatively, you can use Microsoft App Center to build the app if you don't have a local development environment set up yet. Note that you cannot use commands like react-native run-android or react-native run-ios to run the app, as you would in a standard React Native project. 
  • Expo's Push Notifications service. Expo no longer manages your push certificates after detaching, so you'll need to manage the push notification pipeline yourself.

What We'll Be Creating

To showcase the benefit of detaching to ExpoKit, we'll be creating an app which needs a native feature that the Expo platform does not currently support. The app will be a location-sharing app. It will mostly run in the background, fetching the user's current location. It will then send that location via Pusher. We'll also create a web page showing the user's current location on a map.

Here's what the app will look like:

location tracking app

You can find the full source of the project in the tutorial GitHub repo.

Setting Up the App

In the remainder of this post, we'll focus on getting our app set up. Then, in the next post, we'll flesh out some of the key code to interact with ExpoKit.

Creating a Pusher App

If you want to use Pusher's services in your app, you'll need to create an app in the Pusher dashboard. Once logged in, go to your dashboard, click on Your apps and then Create new app, and enter the name of the app:

create Pusher app

Once the app is created, go to App Settings and check the Enable client events check box. This will allow us to trigger Pusher events directly from the app instead of from a server. Then click on Update to save the changes:

enable client events

You can find the keys under the App keys tab. We'll need these later, when we connect to the Pusher app.

Creating a Google App

Similarly, we need to create a Google project in order to use the Google Maps API and the Geolocation API. Go to console.developers.google.com and create a new project:

create Google project

Next, go to the project dashboard and click on Enable APIs and Services. Search for Google Maps JavaScript API and Google Maps Geocoding API and enable those.

From the project dashboard, go to Credentials and click on Create Credentials > API Key. Take note of the API key that it generates, as we'll be using it later.

Creating a New Expo Project

Run the following commands in your working directory:
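
Something like the following should work, assuming you have the 2018-era exp command-line tool installed (the project name ocdmom is the one used later in this series):

    npm install -g exp
    exp init ocdmom
    cd ocdmom
    exp start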

Now the Expo app is ready to test. Just scan the QR code with your Expo client app for iOS or Android.

Coding the App

Now we're ready to start coding the app. We'll start developing as a standard Expo project, and then we'll detach to ExpoKit when we need to use custom native features.

Generating the Unique Tracking Code

Clear the contents of the App.js file in the root of the project directory and add the following imports:
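
A minimal set of imports for what follows might look like this (the exact list depends on which components you end up using):

    import React, { Component } from 'react';
    import { StyleSheet, Text, View } from 'react-native';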

We'll also use a custom header component:
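
Assuming it lives in components/Header.js:

    import Header from './components/Header';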

In the constructor, set the unique_code to its initial state:
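
Any short random string will do here; this sketch derives one from Math.random():

    constructor(props) {
      super(props);
      this.state = {
        unique_code: Math.random().toString(36).substring(2, 7) // e.g. "x3f9a"
      };
    }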

The UI of our app will display this unique code.
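
A minimal render method for that might look like the following (the styles and the header text are placeholders):

    render() {
      return (
        <View style={styles.container}>
          <Header text="OCDMom" />
          <Text style={styles.code}>{this.state.unique_code}</Text>
        </View>
      );
    }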

Finally, here's the code for the Header (components/Header.js) component:
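
A simple stateless component is enough; this sketch assumes a text prop and some basic styling:

    import React from 'react';
    import { View, Text, StyleSheet } from 'react-native';

    const Header = ({ text }) => (
      <View style={styles.header}>
        <Text style={styles.header_text}>{text}</Text>
      </View>
    );

    const styles = StyleSheet.create({
      header: {
        padding: 20,
        backgroundColor: '#3B3738'
      },
      header_text: {
        color: '#FFF',
        fontSize: 20,
        fontWeight: 'bold'
      }
    });

    export default Header;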

Conclusion

This has been the first part of our two-part series on detaching Expo apps to ExpoKit. In this post, you learned the concepts behind ExpoKit and began setting up a project that will use ExpoKit functionality. 

In the next post, we'll detach the app to ExpoKit and then continue coding it so we can run it on a device.

In the meantime, check out some of our other posts about React Native app development!

2018-03-14T12:00:00.000Z2018-03-14T12:00:00.000ZWern Ancheta

Supercharging Your React Native App With AWS Amplify

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-30648
Final product image
What You'll Be Creating

AWS Amplify is an open-source library that enables developers, and in our case mobile developers, to add a host of valuable functionality to applications including analytics, push notifications, storage, and authentication.

Amplify works not only with React Native, but also with Vue, Angular, Ionic, React on the web, and really any other JavaScript framework. In this tutorial, we will be demonstrating some of its core functionality within a React Native application.

The great thing about this library is that it abstracts away what used to be complex setup and configuration for all of these features into an easy-to-use module that we can add to our project by using NPM.

We'll cover AWS Amplify in three parts: authentication, storage, and analytics.

React Native Installation & Setup

If you would like to follow along, create a new React Native project using either Expo (Create React Native App) or the React Native CLI:
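
For example, with Create React Native App (the app name here is just a placeholder):

    npm install -g create-react-native-app
    create-react-native-app amplify-app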

or
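
With the React Native CLI (again, the app name is a placeholder):

    react-native init AmplifyApp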

Next, let's go ahead and install the aws-amplify library using either yarn or npm:
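
Either of the following should do it:

    npm install --save aws-amplify
    # or
    yarn add aws-amplify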

If you are using Expo, you can skip the next step (linking) as Expo already ships the native dependencies for Amazon Cognito support.

If you are not using Expo, we need to link the Cognito library (Amazon Cognito handles authentication) that was installed when we added aws-amplify:
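
At the time of writing, the native dependency that ships with aws-amplify is amazon-cognito-identity-js, so the link command looks like this:

    react-native link amazon-cognito-identity-js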

AWS Setup

Now that the React Native project is created and configured, we need to set up the Amazon services that we will be interacting with.

Inside the directory of the new project, we will be creating a new Mobile Hub project for this tutorial. To create this project, we will be using the AWSMobile CLI, but feel free to use the console if you are a more advanced user. I've posted a YouTube video with a quick overview of the AWSMobile CLI if you want to learn more about it.

Now let's create a new Mobile Hub project in the root of our new project directory:
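
If you don't have the CLI yet, install and configure it first (awsmobile configure will ask for your AWS credentials); init then creates the Mobile Hub project:

    npm install -g awsmobile-cli
    awsmobile configure
    awsmobile init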

After you've created your project and have your aws-exports file (this is automatically created for you by running awsmobile init), we need to configure our React Native project with our AWS project using AWS Amplify.

To do so, we will go into App.js below the last imports and add the following three lines of code:
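
This is the standard Amplify configuration snippet:

    import Amplify from 'aws-amplify';
    import aws_exports from './aws-exports';
    Amplify.configure(aws_exports);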

Authentication

Authentication with Amplify is done by using the Amazon Cognito service. We'll use this service to allow users to sign in and sign up to our application!

Adding Authentication Using the AWSMobile CLI

Let's add user sign-in with Amazon Cognito to our Mobile Hub project. In the root directory, run the following commands:
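
With the awsmobile CLI, that looks like the following (push uploads the configuration change to Mobile Hub):

    awsmobile user-signin enable
    awsmobile push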

Now, we will have a new Amazon Cognito user pool set up and ready to go. (If you would like to view the details of this new service, go to the AWS Console, click on Cognito, and choose the name of the AWSMobile project name you created.)

Next, let's integrate Authentication with Amazon Cognito and AWS Amplify.

Auth Class

Let's begin by taking a look at the main class that you will be using to have full access to the Amazon Cognito services, the Auth class.

The Auth class has many methods that will allow you to do everything from signing up and signing in users to changing their password and everything in between.

It's also simple to work with two-factor authentication with AWS Amplify using the Auth class, as we will see in the following example.

Example

Let's take a look at how you might go about signing up and signing in a user using Amazon Cognito and the Auth class.

We can accomplish a solid sign-up and sign-in flow with relatively little work! We will be using the signUp, confirmSignUp, signIn, and confirmSignIn methods of the Auth class.

In App.js, let's create a few methods that will handle user sign-up with two-factor authentication as well as some state to hold the user input:
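
Here's a sketch of that state and those methods; the field names are illustrative, and Auth is imported from aws-amplify:

    import { Auth } from 'aws-amplify';

    // inside the App component
    state = {
      username: '',
      password: '',
      phone_number: '',
      confirmation_code: ''
    };

    signUp = () => {
      const { username, password, phone_number } = this.state;
      Auth.signUp({ username, password, attributes: { phone_number } })
        .then(() => console.log('sign up successful'))
        .catch(err => console.log('error signing up:', err));
    };

    confirmSignUp = () => {
      const { username, confirmation_code } = this.state;
      Auth.confirmSignUp(username, confirmation_code)
        .then(() => console.log('user confirmed'))
        .catch(err => console.log('error confirming:', err));
    };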

signUp creates the initial sign-up request, which will send an SMS to the new user to confirm their number. confirmSignUp takes the SMS code and the username and confirms the new user in Amazon Cognito.

We will also create a UI for the form input and a button, and wire the class methods to these UI elements. Update the render method to the following:
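
A minimal version might look like this (TextInput and Button come from react-native):

    render() {
      return (
        <View style={styles.container}>
          <TextInput
            style={styles.input}
            placeholder="Username"
            onChangeText={username => this.setState({ username })}
          />
          <TextInput
            style={styles.input}
            placeholder="Password"
            secureTextEntry
            onChangeText={password => this.setState({ password })}
          />
          <TextInput
            style={styles.input}
            placeholder="Phone Number"
            onChangeText={phone_number => this.setState({ phone_number })}
          />
          <Button title="Sign Up" onPress={this.signUp} />
          <TextInput
            style={styles.input}
            placeholder="Confirmation Code"
            onChangeText={confirmation_code => this.setState({ confirmation_code })}
          />
          <Button title="Confirm Sign Up" onPress={this.confirmSignUp} />
        </View>
      );
    }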

Finally, we will update our styles declaration so that we have a nicer UI:
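
One possible styles declaration:

    const styles = StyleSheet.create({
      container: {
        flex: 1,
        justifyContent: 'center',
        padding: 20
      },
      input: {
        height: 45,
        borderBottomWidth: 1,
        borderBottomColor: '#ddd',
        marginBottom: 15
      }
    });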

Basic Sign In Form

To view the final version of this file, click here.

Now, we should be able to sign up, get a confirmation code sent to our phone number, and confirm by typing in the confirmation code.

If you would like to view the details of this newly created user, go back to the AWS Console, click on Cognito, choose the name of the AWSMobile project name you created, and click on Users and groups in the General settings menu on the left.

You can take this further by implementing sign-in and confirm sign-in. Let's take a look at the methods for signIn and confirmSignIn:
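
They mirror the sign-up pair; note that confirmSignIn needs the user object that signIn resolves with:

    signIn = () => {
      const { username, password } = this.state;
      Auth.signIn(username, password)
        .then(user => this.setState({ user }))
        .catch(err => console.log('error signing in:', err));
    };

    confirmSignIn = () => {
      const { user, confirmation_code } = this.state;
      Auth.confirmSignIn(user, confirmation_code)
        .then(() => console.log('signed in'))
        .catch(err => console.log('error confirming sign in:', err));
    };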

Accessing the User's Data and Session

Once the user is signed in, we can then use Auth to access user information!

We can call Auth.currentUserInfo() to get the user's profile information (username, email, etc.) or Auth.currentAuthenticatedUser() to get the user's idToken, JWT, and a lot of other useful information about the user's current logged-in session.
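
For example:

    Auth.currentUserInfo()
      .then(info => console.log('user info:', info));

    Auth.currentAuthenticatedUser()
      .then(user => console.log('current session:', user));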

Analytics

AWS Amplify uses Amazon Pinpoint to handle metrics. When you create a new Mobile Hub project using either the CLI or Mobile Hub, you automatically have Amazon Pinpoint enabled, configured, and ready to go.

If you are not already familiar, Amazon Pinpoint is a service that not only allows developers to add Analytics to their mobile applications, but also lets them send push notifications, SMS messages, and emails to their users.

With AWS Amplify, we can send user session information as metrics to Amazon Pinpoint with a few lines of code.

Let's open the Amazon Pinpoint dashboard so we can view the events we are about to create. If you open your Mobile Hub project in the AWS console, choose Analytics in the top right corner, or go directly into Amazon Pinpoint from the console, and open the current project.

In the dark blue navigation bar on the left, there are four options: Analytics, Segments, Campaigns, and Direct. Choose Analytics.

Pinpoint Console

Once we begin sending events, we will be able to see them in this console view.

Now we're ready to start tracking! In App.js, remove all of the code from the last example, leaving us with basically just a render method containing a container View with a Text greeting, no state, no class methods, and only a container style.

We also import Analytics from aws-amplify:
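
That import looks like this:

    import { Analytics } from 'aws-amplify';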

Tracking Basic Events

One common metric that you may want to track is the number of times the user opens the app. Let's begin by creating an event that will track this.

In React Native, we have the AppState API, which will give us the current app state of active, background, or inactive. If the state is active, it means the user has opened the app. If the state is background, the app is running in the background and the user is either on the home screen or in another app. The inactive state means the app is transitioning between the foreground and background, or the user is multitasking or on a phone call.

When the app becomes active, let's send an event to our analytics that says the app was opened.

To do so, we will record the following event:
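
The event name matches what we'll look for in the dashboard later:

    Analytics.record('App opened!');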

You can view the API for this method in the official docs. The record method takes three arguments: name (string), attributes (object, optional), and metrics (object, optional):

Let's update the class to add an event listener when the component is mounted, and remove it when the component is destroyed. This listener will call _handleAppStateChange whenever the app state changes:
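
A sketch using the AppState API (AppState is imported from react-native):

    componentDidMount() {
      AppState.addEventListener('change', this._handleAppStateChange);
    }

    componentWillUnmount() {
      AppState.removeEventListener('change', this._handleAppStateChange);
    }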

Now, let's create the _handleAppStateChange method:
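
It only records the event when the app becomes active:

    _handleAppStateChange = (nextAppState) => {
      if (nextAppState === 'active') {
        Analytics.record('App opened!');
      }
    };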

Now, we can move the app into the background, open it back up, and this should send an event to our Analytics dashboard. Note: To background the app on an iOS simulator or Android emulator, press Command-Shift-H.

To see this data in the dashboard, click on Events, and choose App opened! from the Events dropdown:

Filtering and viewing Analytics events

You'll also probably notice that you have other data available to you automatically from Mobile Hub, including session data, user sign-up, and user sign-in. I thought it was pretty cool that all this information is automatically recorded.

Tracking Detailed Events

Now let's take this to the next level. What if we wanted to track not only a user opening the app, but which user opened the app and how many times they opened the app?

We can easily do this by adding an attribute in the second argument of record!

Let's act as if we have a user signed in, and track a new event that we will call "User detail - App opened!".

To do this, update the record event to the following:

Analytics.record('User detail - App opened!', { username: 'NaderDabit' })

Then, close and open the app a couple of times. We should now be able to see more detail about the event in our dashboard.

To do so, look to the right of the Event dropdown; there is an Attributes section. Here, we can drill down into the attributes for the event. In our case, we have the user name, so we can now filter this event by user name!

Adding attributes and filtering by attributes

Usage Metrics

The final item we will track is the usage metrics. This is the third argument to record.

Let's add a metric that records the accrued time that the user has been in the app. We can do this pretty easily by setting a time value in the class, incrementing it every second, and then sending this information along to Amazon Pinpoint when the user opens the app:
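
A sketch of that timer logic, extending the componentDidMount from earlier:

    constructor(props) {
      super(props);
      this.time = 0; // accrued seconds in the app
    }

    startTimer = () => {
      setInterval(() => {
        this.time += 1; // add 1 every second
      }, 1000);
    };

    componentDidMount() {
      this.startTimer();
      AppState.addEventListener('change', this._handleAppStateChange);
    }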

Here, we've created a time value initialized to 0 and added a startTimer method that adds 1 to it every second. In componentDidMount, we call startTimer to begin counting.

Now we can add a third argument to Analytics.record() that will record this value as a metric!
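
Putting the three arguments together in the handler:

    _handleAppStateChange = (nextAppState) => {
      if (nextAppState === 'active') {
        Analytics.record(
          'User detail - App opened!',
          { username: 'NaderDabit' },
          { time: this.time }
        );
      }
    };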

Storage

Let's look at how we can use Amplify with Amazon S3 to add storage to our app.

To add S3 to your Mobile Hub project, run the following commands:
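
With the awsmobile CLI, user-files is the feature that backs storage with S3:

    awsmobile user-files enable
    awsmobile push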

Example Usage

AWS Amplify has a Storage API that we can import just as we have with the other APIs:

import { Storage } from 'aws-amplify'

We can then call methods on Storage like get, put, list, and remove to interact with items in our bucket.

When an S3 bucket is created, we will automatically have a default image in our bucket in the public folder; mine is called example-image.png. Let's see if we can use AWS Amplify to read and view this image from S3.

View of S3 bucket public folder

As I mentioned above, Storage has a get method that we will call to retrieve items, and the method to retrieve this image would look something like this:
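
Storage.get returns a promise that resolves with a pre-signed URL for the object:

    Storage.get('example-image.png')
      .then(url => console.log('image url:', url))
      .catch(err => console.log(err));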

To demonstrate this functionality in our React Native app, let's create some functionality that retrieves this image from S3 and shows it to our user.

We will need to import Image from React Native, as well as Storage from aws-amplify.

Now, we will need to have some state to hold this image, as well as a method to retrieve the image and hold it in the state. Let's add the following to our class above the render method:
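
A sketch of that state and method:

    state = { image: null };

    getImage = () => {
      Storage.get('example-image.png')
        .then(image => this.setState({ image }))
        .catch(err => console.log('error fetching image:', err));
    };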

Finally, let's add some UI elements to retrieve the image and render it to the UI:
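
Inside the render method's container View, something like this would do (the dimensions are arbitrary):

    <Button title="Get Image" onPress={this.getImage} />
    {
      this.state.image && (
        <Image
          source={{ uri: this.state.image }}
          style={{ width: 300, height: 300 }}
        />
      )
    }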

App screenshots showing the Get Image feature

Now, we should be able to click the button and view the image from S3!

To view the final version of this file, click here.

Conclusion

Overall, AWS Amplify provides a really easy way to accomplish what used to be complex functionality with not much code, integrating seamlessly with many AWS services.

We did not cover push notifications, which were also recently added to AWS Amplify, but these will be covered in an upcoming post!

AWS Amplify is actively being maintained, so if you have any feature requests or ideas, feel free to comment, submit an issue or pull request, or just star or watch the project if you wish to be kept up to date with future features!

And in the meantime, check out some of our other posts about coding React Native apps.

2018-03-23T16:19:40.330Z2018-03-23T16:19:40.330ZNader Dabit

Detaching Expo Apps to ExpoKit

$
0
0

In my Easier React Native Development With Expo post, you learned how Expo makes it easier for beginners to start creating apps with React Native. You also learned that Expo lets developers get up and running with React Native faster because there's no longer a need to set up Android Studio, Xcode, or other development tools.

But as you have also seen, Expo doesn't support all of the native features that an app might need. Though the Expo team is always working to support more native functionality, it's a good idea to learn how to convert an existing Expo project to a standard native project so you can easily transition if the need arises. 

So, in this two-part series, we're taking a look at how to do that. In the first part of the series, you learned the basic concepts of ExpoKit. In this post, we'll continue where we left off by actually detaching the app to ExpoKit and continue coding the location-sharing app. 

Detaching to ExpoKit

In order to detach to ExpoKit, you first have to edit the app.json and package.json files. 

In the app.json file, make sure that a name has been set, and that platforms lists the platforms you want to build for.

If you want to build for iOS, you must specify the ios option:
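
For example (the bundle identifier here is a placeholder; use your own reverse-domain ID):

    "ios": {
      "bundleIdentifier": "com.yourcompany.ocdmom"
    }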

If you want to support Android, then also specify the following option:
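
Again, the package name below is a placeholder:

    "android": {
      "package": "com.yourcompany.ocdmom"
    }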

There are other options that were prefilled by the exp command-line tool when the project was created. But the only important ones are the bundleIdentifier for iOS and package for Android. These will be the unique IDs for the app once it gets published to the App Store or Google Play. Detaching requires those details because it actually generates the native code for the app to be run on a device. You can find more information about the different configuration options for the app.json file in the Expo documentation.

You can view the full contents of the file in the GitHub repo.

Next, open the package.json file and add the name of the project:
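
Assuming the project was created as ocdmom (the same name Xcode will show later for the workspace):

    "name": "ocdmom"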

This should be the name that you used when you created the project using exp init. It's very important that they are the same because the name you specify in the package.json is used when compiling the app. Inconsistencies in this name will cause an error.

Now we're ready to detach to ExpoKit. Execute the following command at the root of the project directory:
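
With the exp CLI, detaching is a single command:

    exp detach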

This will download the native Expo packages for Android and iOS locally.

You should see an output similar to the following if it succeeded:

Expo detach

If you're deploying to iOS, you need to install the latest version of Xcode. At the time of writing of this tutorial, the latest version is 9. Next, install CocoaPods by executing sudo gem install cocoapods. This allows you to install the native iOS dependencies of the project. Once that's done, navigate to the ios directory of the project and execute pod install to install all the native dependencies. 

Installing Custom Native Packages

Now that we've detached, we can install native packages just like in a standard React Native project. 

For this app, we'll need the React Native Background Timer and Pusher packages.

First, install the Pusher package because it's easier:
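
The JavaScript client is published as pusher-js:

    npm install --save pusher-js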

This allows us to communicate with the Pusher app you created earlier.

Next, install the React Native Background Timer. This allows us to periodically execute code (even when the app is in the background) based on a specific interval:
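
Install it from npm:

    npm install --save react-native-background-timer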

Unlike the Pusher package, this requires a native library (either iOS or Android) to be linked to the app. Executing the following command does that for you:
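
react-native link takes care of the native side:

    react-native link react-native-background-timer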

Once it's done, it should also have registered the module in android/app/src/main/host/exp/exponent/MainApplication.java. But just to make sure, check that the following exists in that file:
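
The registration should look roughly like this; the package path comes from the library itself, so double-check it against your installed version:

    import com.ocetnik.timer.BackgroundTimerPackage;

    // inside the MainApplication class
    @Override
    public List<ReactPackage> getPackages() {
      return Arrays.<ReactPackage>asList(
        new MainReactPackage(),
        new BackgroundTimerPackage()
      );
    }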

If you're building for iOS, open the Podfile inside the ios directory and make sure the following is added before the post_install declaration:
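
A line along these lines, where the path points at the module's own podspec:

    pod 'react-native-background-timer', :path => '../node_modules/react-native-background-timer'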

Once that's done, execute pod install inside the ios directory to install the native module.

For Android, this is already done automatically when you run the app using Android Studio.

Update the Android Manifest File

If you're building for Android, open the Android manifest file (android/app/src/main/AndroidManifest.xml) and make sure the following permissions are added:
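
The relevant permissions are:

    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />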

This allows us to ask permission for Pusher to access the internet and Expo to get the user's current location on Android devices. 

Running the App

We're not yet done, but it's better to run the app now so you can already see if it works or not. That way, you can also see the changes while we're developing the app.

The first step in running the app is to execute exp start from the root directory of the project. This will start the development server so that any change you make to the source code will get reflected in the app preview.

If you're building for Android, open Android Studio and select Open an existing Android Studio project. In the directory selector that shows up, select the android folder inside the Expo project. Once you've selected the folder, it should index the files in that folder. At that point, you should now be able to rebuild the project by selecting Build > Rebuild Project from the top menu. Once that's done, run the app by selecting Run > Run 'app'.

Android Studio can run the app on any Android device connected to your computer, on one of the emulators you installed via Android Studio, or via Genymotion (Android Studio automatically detects a running emulator instance). For this app, I recommend the Genymotion emulator, since it has a nice GPS emulation widget that allows you to change the location via a Google Maps interface:

Genymotion location emulation

(If you're having problems running the app on your device, be sure to check out this Stack Overflow question on getting Android Studio to recognize your device.)

Once that's done, open the ios/ocdmom.xcworkspace file with Xcode. Once Xcode is done indexing the files, you should be able to hit that big play button and it will automatically run the app on your selected iOS simulator.

Xcode also allows you to mock the location, but only when you build the app for running in the simulator. Making a change to the code and having the development server refresh the app won't actually change the location. To change the location, click on the send icon and select the location you want to use:

Xcode location simulation

Continue Coding the App

Now we're ready to continue writing the code for the app. This time, we'll be adding the functionality to run some code while the app is in the background.

Adding a Background Task

Import the Pusher and Background Timer package that you installed earlier:
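
pusher-js ships a dedicated React Native entry point:

    import Pusher from 'pusher-js/react-native';
    import BackgroundTimer from 'react-native-background-timer';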

Set the value for the Google API key of the Google project you created earlier:
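
A placeholder constant is enough for now (swap in the key you generated earlier):

    const GOOGLE_API_KEY = 'YOUR_GOOGLE_API_KEY';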

Use the Location and Permissions API from Expo:
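
Both come straight from the expo package:

    import { Location, Permissions } from 'expo';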

Expo's APIs work cross-platform, much like what you'd get in a standard React Native project by installing a package such as React Native Permissions; the difference is that here it's built into the Expo SDK.

Next, set the interval (in milliseconds) that the code for tracking the user's current location is going to execute. In this case, we want it to execute every 30 minutes. Note that in the code below we're using the value of the location_status variable to check whether the permission to access the user's current location was granted or not. We'll be setting the value of this variable later, once the component is mounted:
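
A sketch of that schedule; getLocation is an assumed helper name, fleshed out in the next section:

    // inside componentDidMount
    BackgroundTimer.setInterval(() => {
      if (this.state.location_status === 'granted') {
        this.getLocation(); // fetch and broadcast the current location
      }
    }, 1800000); // 30 minutes, in milliseconds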

Getting the Current Location

Get the current location by using Expo's Location API:
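
Assuming an async getLocation method (the name is ours):

    const location = await Location.getCurrentPositionAsync({
      enableHighAccuracy: true
    });
    const { latitude, longitude } = location.coords;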

Next, using the Google Maps Geocoding API, make a request to the reverse geocoding endpoint by supplying the latitude and longitude values. This returns a formatted address based on those coordinates:
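
Still inside the same helper, a sketch of that request:

    const url = `https://maps.googleapis.com/maps/api/geocode/json?latlng=${latitude},${longitude}&key=${GOOGLE_API_KEY}`;
    const response = await fetch(url);
    const data = await response.json();
    const address = data.results[0].formatted_address; // human-readable address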

Sending the Location With Pusher

The next step is to send the location using Pusher. Later on, we're going to create the server which will serve as the auth endpoint and at the same time display the page which shows the user's current location.

Update the constructor to set a default value for the Pusher instance:
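
For example:

    constructor(props) {
      super(props);
      this.pusher = null; // created once the component mounts
    }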

When the component is mounted, we want to initialize Pusher. You can now supply the Pusher API key and cluster from the setting of the Pusher app you created earlier:
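
A sketch, with placeholders for your own credentials (the auth endpoint will point at the server we build below):

    // inside componentDidMount
    this.pusher = new Pusher('YOUR_PUSHER_APP_KEY', {
      authEndpoint: 'YOUR_AUTH_SERVER_URL/pusher/auth',
      cluster: 'YOUR_PUSHER_APP_CLUSTER',
      encrypted: true
    });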

Next, you can now add the code for sending the current location. In Pusher, this is done by calling the trigger() method. The first argument is the name of the event being triggered, and the second argument is an object containing the data you want to send. 

Later on, in the server, we'll subscribe to the same channel which we will subscribe to once the component is mounted. Then we'll bind to the client-location event so that every time it's triggered from somewhere, the server will also get notified (although only when the page it's serving is also subscribed to the same channel):
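
A sketch, assuming user_channel is the subscription created in the next snippet:

    // at the end of getLocation()
    this.user_channel.trigger('client-location', {
      latitude,
      longitude,
      address
    });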

The only time we're going to ask for permission to access the user's current location is when the component is mounted. We will then update the location_status based on the user's selection. The value can either be "granted" or "denied". 

Remember that the code for checking the user's current location is executed periodically. This means that the new value of the location_status variable will also be used at a later time when the function is executed. After that, we also want to subscribe to the Pusher channel where the location updates will be sent:
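
A sketch, assuming an async componentDidMount and the unique_code state from part one (client events are only allowed on private channels, hence the prefix):

    // inside componentDidMount
    const { status } = await Permissions.askAsync(Permissions.LOCATION);
    this.setState({ location_status: status });

    this.user_channel = this.pusher.subscribe(
      'private-' + this.state.unique_code
    );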

Creating the Server

Now we're ready to create the server. First, create your working directory (ocdmom-server) outside of the project directory of the app. Navigate inside that directory and execute npm init. Just press Enter until it creates the package.json file.

Next, install the packages that we need:
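
All three come from npm:

    npm install --save express body-parser pusher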

Here's an overview of what each package does:

  • express: used for creating a server. This is responsible for serving the tracking page as well as responding to the auth endpoint.
  • body-parser: Express middleware which parses the request body and makes it available as a JavaScript object. 
  • pusher: used for communicating with the Pusher app you created earlier.

Once that's done, your package.json file should now look like this:

Create a server.js file and import the packages we just installed:
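
Along with the imports, we create the Express app instance:

    const express = require('express');
    const bodyParser = require('body-parser');
    const Pusher = require('pusher');

    const app = express();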

Configure the server to use the body-parser package and set the public folder as the static files directory:
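
For example:

    app.use(bodyParser.json());
    app.use(bodyParser.urlencoded({ extended: false }));
    app.use(express.static('public'));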

Initialize Pusher. The values supplied here will come from the environment variables. We will add those later, when we deploy the server:
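
The variable names match the secrets we'll create during deployment:

    const pusher = new Pusher({
      appId: process.env.APP_ID,
      key: process.env.APP_KEY,
      secret: process.env.APP_SECRET,
      cluster: process.env.APP_CLUSTER
    });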

Serve the tracking page when the base URL is accessed:
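
Assuming the page lives at public/index.html:

    app.get('/', (req, res) => {
      res.sendFile(__dirname + '/public/index.html');
    });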

Next, create the route for responding to requests to the auth endpoint. This will be hit every time the app initializes the connection to Pusher, as well as when the tracking page is accessed. What this does is authenticate the user so they can communicate to the Pusher app directly from the client side. 

Note that this doesn't really have any security measures in place. This means anyone can just make a request to your auth endpoint if they have access to your Pusher App key. In a production app, you'd want more robust security!
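
A minimal sketch of that route (pusher.authenticate signs the socket ID and channel name with your app secret):

    app.post('/pusher/auth', (req, res) => {
      const socketId = req.body.socket_id;
      const channel = req.body.channel_name;
      const auth = pusher.authenticate(socketId, channel); // no extra checks here
      res.send(auth);
    });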

Lastly, make the server listen on the port specified in the environment variables, falling back to port 80 if none is set:
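
For example:

    app.listen(process.env.PORT || 80, () => {
      console.log('server running');
    });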

Tracking Page

The tracking page displays a map which gets updated every time the client-location event is triggered from the app. Don't forget to supply your Google API key:
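
A sketch of public/index.html; note the callback=initMap parameter, which calls the function we define in tracker.js once the Maps library has loaded:

    <!DOCTYPE html>
    <html>
      <head>
        <link rel="stylesheet" href="css/style.css" />
      </head>
      <body>
        <div id="map"></div>
        <script src="https://js.pusher.com/4.0/pusher.min.js"></script>
        <script src="js/tracker.js"></script>
        <script src="https://maps.googleapis.com/maps/api/js?key=YOUR_GOOGLE_API_KEY&callback=initMap" async defer></script>
      </body>
    </html>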

Next, create a public/js/tracker.js file and add the following:
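
A small helper for reading a query parameter (the code parameter name is our assumption):

    function getQueryParam(name) {
      const params = new URLSearchParams(window.location.search);
      return params.get(name);
    }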

The function above extracts the query parameter from the URL. The unique code (the one displayed in the app) needs to be included as a query parameter when the base URL of the server is accessed on a browser. This allows us to keep track of the user's location because it will subscribe us to the same channel as the one subscribed to by the app.

Next, initialize Pusher. The code is similar to the code in the server earlier. The only difference is that we only need to specify the Pusher app key, auth endpoint, and cluster:
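
Placeholders again for the key and cluster:

    const pusher = new Pusher('YOUR_PUSHER_APP_KEY', {
      authEndpoint: '/pusher/auth',
      cluster: 'YOUR_PUSHER_APP_CLUSTER',
      encrypted: true
    });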

Check if the code is supplied as a query parameter, and only subscribe to the Pusher channel if it's supplied:
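
For example:

    let channel = null;
    const code = getQueryParam('code');
    if (code) {
      channel = pusher.subscribe('private-' + code); // same channel name the app subscribes to
    }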

Add the function for initializing the map. This will display the map along with a marker pointing to the default location we've specified:
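
A sketch using the Maps JavaScript API (the default coordinates are arbitrary):

    let map = null;
    let marker = null;

    function initMap() {
      const default_location = { lat: 40.6732379, lng: -73.9516262 };
      map = new google.maps.Map(document.getElementById('map'), {
        center: default_location,
        zoom: 15
      });
      marker = new google.maps.Marker({
        position: default_location,
        map: map
      });
    }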

Bind to the client-location event. The callback function gets executed every time the app triggers a client-location event which has the same unique code as the one the user supplied as a query parameter:
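
Each event moves the marker and re-centers the map:

    if (channel) {
      channel.bind('client-location', (data) => {
        const position = { lat: data.latitude, lng: data.longitude };
        marker.setPosition(position);
        map.panTo(position);
      });
    }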

Next, add the styles for the tracking page (public/css/style.css):
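
Only the map element really needs styling; a minimal version:

    #map {
      width: 100%;
      height: 100vh;
    }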

Deploying the Server

We'll be using Now to deploy the server. It's free for open-source projects.

Install Now globally:
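
Via npm:

    npm install -g now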

Once it's installed, you can now add the Pusher app credentials as secrets. As mentioned earlier, Now is free for open-source projects. This means that once the server has been deployed, its source code will be available at the /_src path. This isn't really good because everyone can also see your Pusher app credentials. So what we'll do is add them as a secret so that they can be accessed as an environment variable. 

Remember the process.env.APP_ID or process.env.APP_KEY from the server code earlier? Those are being set as environment variables via secrets. pusher_app_id is the name assigned to the secret, and YOUR_PUSHER_APP_ID is the ID of your Pusher app. Execute the following commands to add your Pusher app credentials as secrets:
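
One secret per credential, with your real values substituted:

    now secrets add pusher_app_id YOUR_PUSHER_APP_ID
    now secrets add pusher_app_key YOUR_PUSHER_APP_KEY
    now secrets add pusher_app_secret YOUR_PUSHER_APP_SECRET
    now secrets add pusher_app_cluster YOUR_PUSHER_APP_CLUSTER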

Once you've added those, you can now deploy the server. APP_ID is the name of the environment variable, and pusher_app_id is the name of the secret you want to access:
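
Each -e flag maps a secret onto an environment variable:

    now -e APP_ID=@pusher_app_id -e APP_KEY=@pusher_app_key -e APP_SECRET=@pusher_app_secret -e APP_CLUSTER=@pusher_app_cluster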

This is how it looks once it's done deploying. The URL it returns is the base URL of the server:

deploy server

Copy that URL over to the App.js file and save the changes:
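
That is, point the auth endpoint at the deployment (the URL below is a placeholder):

    authEndpoint: 'https://your-deployment-url.now.sh/pusher/auth',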

At this point, the app should now be fully functional.

Conclusion

That's it! In this two-part series, you've learned how to detach an existing Expo project to ExpoKit. ExpoKit is a good way to use some of the tools that the Expo platform provides while your app is already converted to a standard native project. This allows you to use existing native modules for React Native and to create your own. 

While you're here, check out some of our other posts on React Native app development!

2018-03-26T13:18:25.000Z2018-03-26T13:18:25.000ZWern Ancheta

Detaching Expo Apps to ExpoKit

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-30698

In my Easier React Native Development With Expo post, you learned about how Expo makes it easier for beginners to begin creating apps with React Native. You also learned that Expo allows developers to get up and running with developing React Native apps faster because there's no longer a need to set up Android Studio, Xcode, or other development tools.

But as you have also seen, Expo doesn't support all of the native features that an app might need. Though the Expo team is always working to support more native functionality, it's a good idea to learn how to convert an existing Expo project to a standard native project so you can easily transition if the need arises. 

So, in this two-part series, we're taking a look at how to do that. In the first part of the series, you learned the basic concepts of ExpoKit. In this post, we'll continue where we left off by actually detaching the app to ExpoKit and continue coding the location-sharing app. 

Detaching to ExpoKit

In order to detach to ExpoKit, you first have to edit the app.json and package.json files. 

In the app.json file, make sure that a name has been set. The platforms should be the platforms you want to build to.

If you want to build for iOS, you must specify the ios option:

If you want to support Android, then also specify the following option:

There are other options that were prefilled by the exp command-line tool when the project was created. But the only important ones are the bundleIdentifier for iOS and package for Android. These will be the unique IDs for the app once they get published to the Apple or Play store. Detaching requires those details because it actually generates the native code for the app to be run on a device. You can find more information about the different configuration options for the app.json file in the Expo documentation.

You can view the full contents of the file in the GitHub repo.

Next, open the package.json file and add the name of the project:

This should be the name that you used when you created the project using exp init. It's very important that they are the same because the name you specify in the package.json is used when compiling the app. Inconsistencies in this name will cause an error.

Now we're ready to detach to ExpoKit. Execute the following command at the root of the project directory:

This will download the native Expo packages for Android and iOS locally.

You should see an output similar to the following if it succeeded:

Expo detach

If you're deploying to iOS, you need to install the latest version of Xcode. At the time of writing of this tutorial, the latest version is 9. Next, install CocoaPods by executing sudo gem install cocoapods. This allows you to install the native iOS dependencies of the project. Once that's done, navigate to the ios directory of the project and execute pod install to install all the native dependencies. 

Installing Custom Native Packages

Now that we have detached, we can now install native packages just like in a standard React Native project. 

For this app, we'll need the React Native Background Timer and Pusher packages.

First, install the Pusher package because it's easier:

This allows us to communicate with the Pusher app you created earlier.

Next, install the React Native Background Timer. This allows us to periodically execute code (even when the app is in the background) based on a specific interval:

Unlike the Pusher package, this requires a native library (either iOS or Android) to be linked to the app. Executing the following command does that for you:

Once it's done, it should also initialize the module on android/app/src/main/host/exp/exponent/MainApplication.java. But just to make sure, check if the following exists in that file:

If you're building for iOS, open the Podfile inside the ios directory and make sure the following is added before the post_install declaration:

Once that's done, execute pod install inside the ios directory to install the native module.

For Android, this is already done automatically when you run the app using Android Studio.

Update the Android Manifest File

If you're building for Android, open the Android manifest file (android/app/src/main/AndroidManifest.xml) and make sure the following permissions are added:

This allows us to ask permission for Pusher to access the internet and Expo to get the user's current location on Android devices. 

Running the App

We're not yet done, but it's better to run the app now so you can already see if it works or not. That way, you can also see the changes while we're developing the app.

The first step in running the app is to execute exp start from the root directory of the project. This will start the development server so that any change you make to the source code will get reflected in the app preview.

If you're building for Android, open Android Studio and select Open an existing Android Studio project. In the directory selector that shows up, select the android folder inside the Expo project. Once you've selected the folder, it should index the files in that folder. At that point, you should now be able to rebuild the project by selecting Build > Rebuild Project from the top menu. Once that's done, run the app by selecting Run > Run 'app'.

Android Studio can run the app on any Android device connected to your computer, on one of the emulators you installed via Android Studio, or via Genymotion (Android Studio automatically detects a running emulator instance). For this app, I recommend you use Genymotion emulator since it has a nice GPS emulation widget that allows you to change the location via a Google Maps interface:

Genymotion location emulation

(If you're having problems running the app on your device, be sure to check out this Stack Overflow question on getting Android Studio to recognize your device.)

Once that's done, open the ios/ocdmom.xcworkspace file with Xcode. Once Xcode is done indexing the files, you should be able to hit that big play button and it will automatically run the app on your selected iOS simulator.

Xcode also allows you to mock the location, but only when you build the app for running in the simulator. Making a change to the code and having the development server refresh the app won't actually change the location. To change the location, click on the send icon and select the location you want to use:

Xcode location simulation

Continue Coding the App

Now we're ready to continue writing the code for the app. This time, we'll be adding the functionality to run some code while the app is in the background.

Adding a Background Task

Import the Pusher and Background Timer package that you installed earlier:

Set the value for the Google API key of the Google project you created earlier:

Use the Location and Permissions API from Expo:

Expo's APIs work cross-platform—this is not unlike a standard React Native project where you have to install a package like React Native Permissions to gain access to a permissions API that works cross-platform.

Next, set the interval (in milliseconds) that the code for tracking the user's current location is going to execute. In this case, we want it to execute every 30 minutes. Note that in the code below we're using the value of the location_status variable to check whether the permission to access the user's current location was granted or not. We'll be setting the value of this variable later, once the component is mounted:

Getting the Current Location

Get the current location by using Expo's Location API:

Next, using the Google Maps Geocoding API, make a request to the reverse geocoding endpoint by supplying the latitude and longitude values. This returns a formatted address based on those coordinates:

Sending the Location With Pusher

The next step is to send the location using Pusher. Later on, we're going to create the server which will serve as the auth endpoint and at the same time display the page which shows the user's current location.

Update the constructor to set a default value for the Pusher instance:

When the component is mounted, we want to initialize Pusher. You can now supply the Pusher API key and cluster from the setting of the Pusher app you created earlier:

Next, you can now add the code for sending the current location. In Pusher, this is done by calling the trigger() method. The first argument is the name of the event being triggered, and the second argument is an object containing the data you want to send. 

Later on, in the server, we'll subscribe to the same channel which we will subscribe to once the component is mounted. Then we'll bind to the client-location event so that every time it's triggered from somewhere, the server will also get notified (although only when the page it's serving is also subscribed to the same channel):

The only time we're going to ask for permission to access the user's current location is when the component is mounted. We will then update the location_status based on the user's selection. The value can either be "granted" or "denied". 

Remember that the code for checking the user's current location is executed periodically. This means that the new value of the location_status variable will also be used at a later time when the function is executed. After that, we also want to subscribe to the Pusher channel where the location updates will be sent:

Creating the Server

Now we're ready to create the server. First, create your working directory (ocdmom-server) outside of the project directory of the app. Navigate inside that directory and execute npm init. Just press Enter until it creates the package.json file.

Next, install the packages that we need:

Here's an overview of what each package does:

  • express: used for creating a server. This is responsible for serving the tracking page as well as responding to the auth endpoint.
  • body-parser: Express middleware which parses the request body and makes it available as a JavaScript object. 
  • pusher: used for communicating with the Pusher app you created earlier.

Once that's done, your package.json file should now look like this:

Create a server.js file and import the packages we just installed:

Configure the server to use the body-parser package and set the public folder as the static files directory:

Initialize Pusher. The values supplied here will come from the environment variables. We will add those later, when we deploy the server:
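A sketch, with the names matching the environment variables we'll map to secrets during deployment:

    const pusher = new Pusher({
      appId: process.env.APP_ID,
      key: process.env.APP_KEY,
      secret: process.env.APP_SECRET,
      cluster: process.env.APP_CLUSTER
    });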

Serve the tracking page when the base URL is accessed:
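Something like the following, where tracker.html is an assumed filename for the tracking page:

    app.get('/', (req, res) => {
      res.sendFile(__dirname + '/public/tracker.html');
    });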

Next, create the route for responding to requests to the auth endpoint. This will be hit every time the app initializes the connection to Pusher, as well as when the tracking page is accessed. What this does is authenticate the user so they can communicate with the Pusher app directly from the client side. 

Note that this doesn't really have any security measures in place. This means anyone can just make a request to your auth endpoint if they have access to your Pusher App key. In a production app, you'd want more robust security!
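A minimal sketch using the authenticate() method from the pusher package:

    app.post('/pusher/auth', (req, res) => {
      const socketId = req.body.socket_id;
      const channel = req.body.channel_name;
      // Authenticate every request unconditionally (again: not production-safe).
      const auth = pusher.authenticate(socketId, channel);
      res.send(auth);
    });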

Lastly, make the server listen on the port specified in the environment variables, falling back to port 80 if none is specified:
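For example:

    const port = process.env.PORT || 80;
    app.listen(port, () => {
      console.log('Server listening on port ' + port);
    });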

Tracking Page

The tracking page displays a map which gets updated every time the client-location event is triggered from the app. Don't forget to supply your Google API key:
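A skeleton of that page might look like this. The Pusher script version and the file paths are era-appropriate assumptions, and the Maps script's callback=initMap parameter hooks into the function we'll define in tracker.js:

    <!DOCTYPE html>
    <html>
      <head>
        <title>Tracker</title>
        <link rel="stylesheet" href="css/style.css">
      </head>
      <body>
        <div id="map"></div>

        <script src="https://js.pusher.com/4.0/pusher.min.js"></script>
        <script src="js/tracker.js"></script>
        <script async defer
          src="https://maps.googleapis.com/maps/api/js?key=YOUR_GOOGLE_API_KEY&callback=initMap"></script>
      </body>
    </html>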

Next, create a public/js/tracker.js file and add the following:
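A sketch of that helper, where getQueryParam is an assumed name:

    // Extract a query parameter (e.g. ?code=1234) from the current URL.
    function getQueryParam(name) {
      const match = new RegExp('[?&]' + name + '=([^&#]*)').exec(window.location.search);
      return match ? decodeURIComponent(match[1].replace(/\+/g, ' ')) : null;
    }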

The function above extracts the query parameter from the URL. The unique code (the one displayed in the app) needs to be included as a query parameter when the base URL of the server is accessed on a browser. This allows us to keep track of the user's location because it will subscribe us to the same channel as the one subscribed to by the app.

Next, initialize Pusher. The code is similar to the code in the server earlier. The only difference is that we only need to specify the Pusher app key, auth endpoint, and cluster:
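For example:

    const pusher = new Pusher('YOUR_PUSHER_APP_KEY', {
      authEndpoint: '/pusher/auth',
      cluster: 'YOUR_PUSHER_APP_CLUSTER',
      encrypted: true
    });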

Check if the code is supplied as a query parameter, and only subscribe to the Pusher channel if it's supplied:
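The channel name must mirror whatever the app subscribes to; "private-user-" is the prefix assumed earlier:

    const code = getQueryParam('code');
    let channel = null;
    if (code) {
      channel = pusher.subscribe('private-user-' + code);
    }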

Add the function for initializing the map. This will display the map along with a marker pointing to the default location we've specified:
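A sketch using the Google Maps JavaScript API; the default coordinates are arbitrary:

    let map;
    let marker;

    function initMap() {
      const defaultLocation = { lat: 40.6413, lng: -73.7781 };
      map = new google.maps.Map(document.getElementById('map'), {
        center: defaultLocation,
        zoom: 15
      });
      marker = new google.maps.Marker({ position: defaultLocation, map: map });
    }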

Bind to the client-location event. The callback function gets executed every time the app triggers a client-location event which has the same unique code as the one the user supplied as a query parameter:
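Roughly:

    if (channel) {
      channel.bind('client-location', (data) => {
        const position = { lat: data.latitude, lng: data.longitude };
        marker.setPosition(position);
        map.panTo(position);
      });
    }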

Next, add the styles for the tracking page (public/css/style.css):
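Minimally, the map just needs to fill the viewport:

    body {
      margin: 0;
      padding: 0;
    }

    #map {
      width: 100%;
      height: 100vh;
    }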

Deploying the Server

We'll be using Now to deploy the server. It's free for open-source projects.

Install Now globally:
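Via npm:

    npm install -g now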

Once it's installed, you can add the Pusher app credentials as secrets. As mentioned earlier, Now is free for open-source projects. This means that once the server has been deployed, its source code will be available at the /_src path. That's a problem, because anyone would be able to see your Pusher app credentials. So what we'll do is add them as secrets so that they can be accessed as environment variables. 

Remember the process.env.APP_ID or process.env.APP_KEY from the server code earlier? Those are being set as environment variables via secrets. pusher_app_id is the name assigned to the secret, and YOUR_PUSHER_APP_ID is the ID of your Pusher app. Execute the following commands to add your Pusher app credentials as secrets:
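One now secret add per credential; the YOUR_... values are placeholders for your own Pusher app settings:

    now secret add pusher_app_id YOUR_PUSHER_APP_ID
    now secret add pusher_app_key YOUR_PUSHER_APP_KEY
    now secret add pusher_app_secret YOUR_PUSHER_APP_SECRET
    now secret add pusher_app_cluster YOUR_PUSHER_APP_CLUSTER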

Once you've added those, you can now deploy the server. APP_ID is the name of the environment variable, and pusher_app_id is the name of the secret you want to access:
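Each -e flag maps an environment variable to a secret, with the @ prefix referencing the secret by name:

    now -e APP_ID=@pusher_app_id -e APP_KEY=@pusher_app_key -e APP_SECRET=@pusher_app_secret -e APP_CLUSTER=@pusher_app_cluster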

This is how it looks once it's done deploying. The URL it returns is the base URL of the server:

deploy server

Copy that URL over to the App.js file and save the changes:

At this point, the app should now be fully functional.

Conclusion

That's it! In this two-part series, you've learned how to detach an existing Expo project to ExpoKit. ExpoKit is a good way to use some of the tools that the Expo platform provides while your app is already converted to a standard native project. This allows you to use existing native modules for React Native and to create your own. 

While you're here, check out some of our other posts on React Native app development!

2018-03-26T13:18:25.000Z2018-03-26T13:18:25.000ZWern Ancheta

Ionic From Scratch: Working With Ionic Components

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-30779

What Are Ionic Components? 

Ionic components, for the most part, are what make your hybrid app come to life. Components provide the user interface for your app, and Ionic comes bundled with over 28 components. These will help you create an amazing first impression of your app. 

Of course, you're unlikely to use all 28 of these components in a single app. So how do you decide which ones to use?

Well, luckily there are components that you will find in almost every app. In the previous article, I showed you how to navigate to another view using the Ionic button component. All we needed to create this button was the following code:
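It was a one-liner along these lines; the click handler's name will depend on your own code:

    <button ion-button (click)="goToYourPage()">Take Me To Your Page</button>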

Here, we're already using one of the Ionic components available to us. That’s the beauty of Ionic: we don't have to concern ourselves with how the button component is constructed, all we need to know is how to properly use the relevant component. 

When to Use Ionic Components? 

As a beginner, I doubt that there will ever be an app you develop that doesn't make use of Ionic components. It is also possible to develop your own custom components, but that is a more advanced Ionic and Angular topic for another day.

With the above said, let’s have a look at how to use the most commonly used components in Ionic mobile applications:

Slides Component

The slides component normally serves as an intro for apps, and below is an image of its common usage:

Slides used in an intro for an app

List Component

Lists are one of the components you will also regularly use in your Ionic apps. Take a look at the screenshot below for an example.

Example of a list in an app

Adding Components to Your Project

Now that we've gathered a bit of info on Ionic components, let's try and put a few of these 'building blocks' together. Let’s go ahead and add some components to our Ionic project.

We will be using the project we created in the previous tutorial, and since home is our app's entry point, we will add slides to the home.html file to add our slides. We will do so by navigating to the home.html file in src/pages/home and making the following changes to the file:
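Inside the page's <ion-content>, the markup might look like this:

    <ion-content>
      <ion-slides pager>
        <ion-slide>
          <h2>Slide One</h2>
        </ion-slide>
        <ion-slide>
          <h2>Slide Two</h2>
        </ion-slide>
        <ion-slide>
          <h2>Slide Three</h2>
        </ion-slide>
      </ion-slides>
    </ion-content>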

As you can see above, we've added three slides using the slides component. Each slide's content goes between the <ion-slide> and </ion-slide> tags. You can generate as many slides as you want, but for the purpose of this example, we've only created three.

We'll use another Ionic component: the list component. In order to do so, let's go ahead and generate a new page titled my-list. You should remember how to generate a new page from the previous tutorial using the following command: ionic generate page my-list.

With our newly created page added to our app, let's go ahead and navigate to my-list.html and edit the file as follows:
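For example:

    <ion-content>
      <ion-list>
        <ion-item>Item One</ion-item>
        <ion-item>Item Two</ion-item>
        <ion-item>Item Three</ion-item>
      </ion-list>
    </ion-content>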

With the above code added to your app, you should be able to see a list with three items. Now that's fine, but I'm sure you'd like to see some smooth scrolling with acceleration and deceleration when you scroll through the list, right? Well, that's easy to achieve—we just need a larger list!

Consider the following code inside the my-list.html file:
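The same markup, just with enough items to overflow the screen:

    <ion-content>
      <ion-list>
        <ion-item>Item 1</ion-item>
        <ion-item>Item 2</ion-item>
        <ion-item>Item 3</ion-item>
        <!-- ...keep adding items until the list is taller than the screen... -->
        <ion-item>Item 20</ion-item>
      </ion-list>
    </ion-content>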

If you update your file with the longer list above, you will see scrolling with acceleration and deceleration. 

Now this is one way of creating a list in our project, but it'll get pretty annoying if we need a list with many more items. That would mean writing <ion-item>...content...</ion-item> for each one. Luckily, there is a better way, and even as a beginner, you should follow this same method when working with large amounts of data. 

The official Ionic documentation shows how to use a different approach for populating a list with items:
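In essence, adapted from the docs:

    <ion-list>
      <ion-item *ngFor="let item of items">
        {{ item }}
      </ion-item>
    </ion-list>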

The magic in the code above is the use of the Angular directive: *ngFor. We won't be diving deeper into what this directive is and what it does, but in short, it iterates over a collection of data, allowing us to build data presentation lists and tables in our app. items is a variable that contains our data, and item is filled in with each item in that list. If you want to learn more about this directive, please take a look at the official Angular documentation.

With that knowledge, let's improve our project with the *ngFor directive. Edit the my-list.html file to reflect the following:
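A sketch of that template, assuming each item exposes title, subTitle, and image fields (which we'll define next):

    <ion-list>
      <ion-item *ngFor="let item of items">
        <ion-avatar item-start>
          <img [src]="item.image">
        </ion-avatar>
        <h2>{{ item.title }}</h2>
        <p>{{ item.subTitle }}</p>
      </ion-item>
    </ion-list>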

A lot of things are happening here. The <ion-list> contains a series of <ion-avatar> components. The item-start attribute means that the avatar will be placed at the start of the item, which is the left-hand side in a left-to-right layout. Each list item also contains a header tag (<h2>) and a paragraph tag (<p>).

So, basically, you can also add additional components inside the list component. Have a look at another great example of how to achieve this in the List In Cards example from the Ionic docs. Again, implementing *ngFor in that example will be of benefit.

Now, back to our code, our item in items contains title, subTitle, and image. Let's go ahead and make the following changes inside our my-list.ts file:
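Something along these lines; the sample titles, subtitles, and image paths are placeholders:

    import { Component } from '@angular/core';
    import { IonicPage, NavController, NavParams } from 'ionic-angular';

    @IonicPage()
    @Component({
      selector: 'page-my-list',
      templateUrl: 'my-list.html',
    })
    export class MyListPage {

      items: Array<{ title: string, subTitle: string, image: string }>;

      constructor(public navCtrl: NavController, public navParams: NavParams) {
        // Hard-coded sample data; in a real app this could come from an API.
        this.items = [
          { title: 'Item One', subTitle: 'First subtitle', image: 'assets/imgs/one.png' },
          { title: 'Item Two', subTitle: 'Second subtitle', image: 'assets/imgs/two.png' },
          { title: 'Item Three', subTitle: 'Third subtitle', image: 'assets/imgs/three.png' }
        ];
      }
    }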

In the example above, we are populating our items inside our constructor file, my-list.ts. These will be displayed inside our page template, the my-list.html file. This data can come from any source: a local database, user input, or an external REST API. But for the sake of this example, we are just hard-coding the data.

Conclusion

Although we didn't cover all of the Ionic components, the same principles apply to the others. I would like to encourage you to play around and test the rest of the components and start getting familiar with using them. As I mentioned right at the beginning, these components are going to be the building blocks of every Ionic application you'll ever build!

In the meantime, check out some of our other posts on Ionic app development.

2018-03-26T18:23:02.404Z2018-03-26T18:23:02.404ZTinashe Munyaka

Continuous Delivery With fastlane for iOS

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-30706

Introduction

iOS developers have been fortunate enough to enjoy and work with the robust development platform that Apple has provided, primarily Xcode. This has helped inspire the engaging and powerful apps that consumers enjoy on the App Store today. Xcode provides an intuitive IDE and that, coupled with the emergence of Swift as a truly modern programming language, has made programming on the platform sheer enjoyment.

However, while the development aspect of the workflow is cohesive, the workflow breaks down when it comes to the chores involved in dealing with code signing and distributing apps. This has been a long-standing problem for the platform, and while it has improved incrementally, it is still a bottleneck for almost all developers. This has in many respects stifled continuous delivery of apps—that is to say, the need for manual building and distribution of apps daily internally and externally is error-prone and laborious. 

That's where fastlane comes in. The fastlane suite of tools makes distributing apps much easier, allowing developers to focus on their apps and let the tooling take on tasks like managing provisioning profiles and certificates and building, packaging and distributing apps. One of fastlane's toolchains is a client-side automated Continuous Delivery turnkey solution that iOS developers can leverage to ensure their apps get tested and validated continuously by others, with minimal human intervention. 

The fastlane logo

Developed by Felix Krause (@krausefx), fastlane consists of an open-source suite of tools that unifies the automation of building and deploying iOS apps via the command line, as well as integrating with various third-party libraries in addition to Apple’s own APIs. As somewhat of a cult toolchain amongst iOS developers, and backed by Google, fastlane will save you lots of time by automating a lot of your manual daily and weekly tasks.

In this tutorial, we are going to explore two very popular features of fastlane: code signing and packaging/distributing apps.

Objectives of This Tutorial

This tutorial will introduce you to the fastlane toolchain and show you how to leverage the tool to automate and optimize your iOS development workflow. You will learn:

  • the basics of getting started with fastlane
  • code signing your app
  • packaging and distributing your app

Assumed Knowledge

This tutorial assumes you have a working knowledge of Swift and iOS development, although you won't be doing any Swift coding in this tutorial. You'll be using the command prompt to build and run fastlane commands. 

Getting Started With fastlane

The fastlane toolchain is essentially written in Ruby and connects to the Apple Developer Center and iTunes Connect API via the spaceship Ruby library, authenticating and authorizing users securely. It operates around the creation of a configuration file, called a Fastfile, which you can think of as a recipe file where you set the actions you want to be performed when building your app. These actions are organized into "lanes". For example, you would configure a lane for deploying to the App Store, and another lane for distributing to TestFlight. A lane could be composed of the following individual actions:

  1. building your project
  2. incrementing the build number
  3. running unit tests
  4. sending your .ipa to TestFlight
  5. sending a Slack message to your team

You can think of lanes as functions which group related tasks. You can even call one lane from another, to further decouple and reuse your lanes. 

But before we dive into the fastlane actions, you will need to set up your environment to use fastlane.

Setting Up fastlane

Make sure you have the latest version of Xcode installed. You will also need to have Xcode Tools installed on your system. You can check whether Xcode Tools is installed by entering the following in the terminal:
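The check is a single command:

    xcode-select -p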

If you get back the full path to your developer folder then you are ready to go. You should see something like the following.
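On a typical installation, the output is a path along these lines (yours may differ):

    /Applications/Xcode.app/Contents/Developer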

Otherwise, let's get the latest version of the Xcode command line tools by typing the following, in terminal:
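This kicks off Apple's installer for the command-line tools:

    xcode-select --install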

You should get a prompt similar to the following: 

gcc command prompt

Next, you are going to need to install Homebrew. Homebrew is a powerful package manager that lets you install hundreds of open-source tools, fastlane among them. To install Homebrew, type the following in the command line:
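At the time of writing, the install command published on brew.sh looks like this (check the site for the current version):

    /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"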

Once Homebrew is set up, you can install fastlane by entering:
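fastlane ships as a Homebrew cask:

    brew cask install fastlane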

Note that if you prefer not to install Homebrew, you can install fastlane directly in the terminal via ruby, by entering the following:
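The RubyGems route is a one-liner:

    sudo gem install fastlane -NV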

To confirm fastlane is installed and ready on your system, enter the following command in the terminal to initialize a new fastlane configuration file:
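Run this from the root of your Xcode project:

    fastlane init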

You will be prompted to enter your Apple ID so that fastlane can hook into iTunes Connect seamlessly. Fill out any other prompt questions and you will see a new /fastlane sub-directory created for you. You will be primarily concerned with the /fastlane/Fastfile configuration file, where you will orchestrate all of your fastlane actions. 

Take a quick look at the file. You will be working with it over the next few sections, starting with configuring fastlane to code sign your apps. 

Code Signing Your Apps

One of the most popular features of fastlane as a toolchain is being able to code sign your apps automatically, avoiding the ordeal of having to deal with certificates and provisioning profiles. This can be a lot of work and, moreover, when you change machines or onboard a new team member, you have to do it all over again. Fastlane provides three actions that help you manage your code signing: cert, sigh, and match.

cert, while useful on its own, usually works in tandem with sigh to complete the process of code signing your app, by managing your certificates and provisioning profiles respectively. It not only creates the certificate for you but will automatically generate a new private key signing request when needed, as well as retrieving and installing the certificate into your keychain, making sure that your certificate is valid each time you run cert. 

sigh creates the corresponding provisioning profile for your certificate for either development, Ad Hoc, or the App Store. Like cert, sigh ensures this provisioning profile stays current, retrieved, and installed into your keychain. Together, these two form the code-signing pair for your app. But before you learn how to use cert and sigh, there is one more related action I want to introduce: match.

match combines the two previous actions but allows you to share your code signing identity across your team, or across multiple machines, securely and privately through your own GitHub repository, creating all the necessary certificates and profiles so that new members can get those credentials by simply calling the fastlane command match. For more information on the concept of match, consult the new approach to code signing guide.

To start using cert, run the following command: 
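Simply:

    fastlane cert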

Similarly, to run sigh:
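Just as simple:

    fastlane sigh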

If you want to see a list of options cert or sigh provides, you can do so with the following commands:
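fastlane's action command prints an action's documentation and available options:

    fastlane action cert
    fastlane action sigh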

Besides running those commands ad hoc, you can include either or both actions as part of your automated workflow, within the Fastfile configuration file I mentioned earlier:
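A minimal lane might look like this; the name signing is a hypothetical choice:

    lane :signing do
      cert
      sigh
    end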

You can then run the entire lane by issuing the following command:
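Using the hypothetical lane name from above:

    fastlane signing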

With two simple lines, you now benefit from having your certificates and provisioning profiles created for you and maintained automatically. Next, you will dive into how fastlane can help package and distribute your apps.

Package & Distribute Your Apps 

The next two actions you will learn about are gym and deliver, which you will leverage to build, package, and distribute your app to TestFlight or the App Store. gym builds and packages your app with a single command, generating a signed .ipa file for you. 

By automating this as part of your Fastfile, you can trigger building and packaging through a continuous integration workflow and have the latest version of your app in the hands of users daily or hourly. To package your app through fastlane, you can simply run:
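In the simplest case:

    fastlane gym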

To specify a particular workspace and scheme, you can add the following optional parameters:
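For example, with MyApp standing in for your own workspace and scheme names:

    fastlane gym --workspace "MyApp.xcworkspace" --scheme "MyApp"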

Just like the Fastfile, fastlane provides a convenient configuration file called a Gymfile, where you can store all of your build-specific settings, such as your workspace, scheme, and more, saving you from having to type them out each time or expose them in your Fastfile. To create a new Gymfile, simply enter the following:
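This generates the file alongside your Fastfile:

    fastlane gym init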

You can then edit that file and enter the configuration parameters necessary for your project.

Next up, deliver takes over where gym left off, by distributing your .ipa file without you having to go through Xcode. As the counterpart to gym, deliver is not only capable of delivering your .ipa binary file but also uploads your screenshots and metadata to iTunes Connect for you, as well as submitting your app to the App Store. 

The action can also download your existing screenshots and metadata from iTunes Connect. The simplest action command is to call:
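With no arguments:

    fastlane deliver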

The end result is a metadata and a screenshots folder, along with the configuration file Deliverfile, which is similar to the Gymfile and Fastfile. 

The metadata and screenshots subfolders generated for you by deliver

You can then modify the contents of those subfolders to modify your app metadata. To upload your metadata, you run:
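Running deliver again picks up the edited metadata and pushes it to iTunes Connect:

    fastlane deliver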

To submit your app for review, append the following parameters:
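For example, with MyApp.ipa as a stand-in for your own binary:

    fastlane deliver --ipa "MyApp.ipa" --submit_for_review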

To download metadata or screenshots, add the following parameters, respectively:
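deliver exposes both as sub-commands:

    fastlane deliver download_metadata
    fastlane deliver download_screenshots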

You can modify your screenshots within the /screenshots sub-folder, but fastlane provides another action that can automate the process of generating screenshots for you. Although automated screenshot generation is outside the scope of this article, you can learn about it in a future fastlane article. 

As with the previous set of actions, you can either run gym and deliver on your own or include it as part of your automation workflow, within Fastfile:
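A sketch of such a lane; release, MyApp, and the option values are assumptions to adapt to your project:

    lane :release do
      gym(scheme: "MyApp")   # build and sign the .ipa
      deliver(force: true)   # upload to iTunes Connect, skipping the HTML report confirmation
    end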

Conclusion

By enabling continuous delivery through automation, fastlane takes the burden of labor off iOS developers with a one-click turnkey solution for building, packaging, distributing, code signing, screenshot generation and much more. This tutorial just scratches the surface of what's possible with fastlane, and in subsequent articles we will explore more actions you can implement to further automate and optimize your workflow. 

From generating localized screenshots to its deep integration with prominent tools such as Jenkins CI, Git, Crashlytics, HockeyApp, and Slack, fastlane is as essential as CocoaPods for your iOS development toolkit. It has a robust community of third-party plugins and is also backed by Google.

You can also learn all about the art of continuous delivery and fastlane in my book Continuous Delivery for Mobile With fastlane, available from Packt Publishing.

Continuous Delivery for Mobile With fastlane

And while you're here, check out some of our other posts on iOS app development!

2018-03-28T00:41:17.000Z2018-03-28T00:41:17.000ZDoron Katz

How to Create an App

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-25757

There are several ways to create a mobile application. Do you want to know what the best way is? It depends. What technologies do you have experience with? What platforms are you targeting? How much time do you want to spend building your application?

After the introduction of the iPhone and its software development kit, the mobile space went through a revolution. Today, there are millions of mobile applications, countless platforms, and dozens of frameworks and tools to create mobile applications.

How do you decide what is right for you? Answering that question is the focus of this article. I discuss the types of mobile applications you find in the wild, the advantages of native and hybrid applications, and I list some of the more popular platforms.

Application Types

Mobile applications can be broken down into three broad categories:

  • web applications
  • hybrid applications
  • native applications

Each of these types has its pros and cons. If you were to ask me which type best fits your needs then my answer would be that it really depends on exactly what you're trying to do. In order to justify my answer, I first need to tell you about each application type. Let's start with web applications.

Web Applications

You may already be familiar with web applications. Unlike other apps, a web application isn't something you download from a store; it's simply available on any device that can load web pages (or has a web browser). A web application is nothing more than a website, acting and behaving like an application. Before the introduction of the iOS SDK, for example, web applications were the only option for developers wanting to create applications for the original iPhone.

Web applications have a number of distinct advantages, the most important one being development time. Because a web application is a website, it is built once and accessible on every platform that runs a web browser. For some companies, this is a very appealing solution since native development, which we discuss in a moment, can be costly and time-consuming. You could say that web applications are by definition cross-platform.

Another important advantage is the learning curve for developing web applications. To develop a web application, you rely on web technologies that you may already have experience with, such as HTML, CSS, and JavaScript. If you do, then you will be up and running in no time without the use of any proprietary SDKs.

Are there any downsides?

As with any type of application, there are a few downsides to web development. The two most important drawbacks are performance and access to device capabilities. If you are planning to develop a game, then a web application isn't your best option. It is possible, but performance won't be stellar. Websites and web applications have limited access to the capabilities of the device, such as the camera, location services, etc. This has significantly improved over the years, but it still isn't on par with native applications.

If you want to limit the development and maintenance costs of your mobile application and performance isn't the most important aspect, then a web application is certainly worth considering.

If you want to understand what web applications are about and how to get started, check out some of our other content here on Envato Tuts+:

Hybrid Applications

Hybrid applications were and still are incredibly popular. They combine some of the best things of both worlds, that is, web and native. The technologies used to create hybrid applications are identical to the ones used to create web applications, HTML, CSS, and JavaScript. The reason is obvious if you understand how hybrid applications work.

A hybrid application is a web application that runs in a web view of a native application. Put differently, a hybrid application uses a native application as its container to make it look like a native application. This means that, to the user, a hybrid application looks and feels native, more or less. The user can download it from the platform's mobile store and the application icon appears on their home screen.

Any downsides? Because hybrid applications rely on web technologies and run in a web view, they suffer from most of the same problems web applications do. Performance is not on par with native applications. This is improving, though: every year, performance gets better and better. It is impressive how much JavaScript performance has improved during the past decade.

The most popular solution for developing hybrid applications is Apache Cordova, Cordova for short. When Adobe acquired PhoneGap a few years ago, they open sourced most of the code base and Cordova was born. PhoneGap still exists and is Cordova's commercial cousin.

To speed up development, developers often use Cordova in combination with other frameworks, such as Ionic and Onsen UI.

To learn more about hybrid applications, I recommend checking out some of the tutorials we have published on Cordova:

Native Applications

Choosing native development is choosing performance and reliability. Why does native development scare off so many developers? Let's take the iOS platform as an example. If you want to build a native iOS application, you need to learn a new language, Objective-C or Swift. You also need to become familiar with Xcode, Apple's IDE (Integrated Development Environment). A native application generally takes longer to build simply because you are working closer to the metal, so to speak. Objective-C and JavaScript are two very different languages.

What do you get in return? Performance is probably the most compelling advantage of native applications. Native applications feel snappy and, especially for games, they can take full advantage of the resources of the device and operating system. Every feature and capability of the device that is exposed through the SDK's APIs is accessible to the developer. This is another key advantage native has over hybrid and web.

There are a number of cross-platform approaches that make native application development accessible to more developers. The idea is simple: write code in the language of your choice and compile it to a native application. The most popular solutions at the time of writing are Xamarin and React Native.

Xamarin lets developers write native applications for iOS, Android, and Windows Phone using C#. The Xamarin tools leverage the Mono open source project. React Native has its roots at Facebook and enables developers to write native applications using JavaScript.

Envato Tuts+ covers a broad range of platforms, including iOS, Android, Xamarin, and React Native. Take a look at these tutorials to become familiar with them:

Native or Hybrid

What is the best solution? Native? Or hybrid? Or a web application? There is no one answer. It depends on several factors. If you are a developer, then the answer is less complicated. What technologies are you already familiar with? Do you want to focus on one platform or create applications for multiple platforms?

It is becoming a true challenge to stay on top of iOS, Android, and Windows Phone. Some developers write native applications for multiple platforms, but it is challenging and if you can do it, I definitely recommend it. The mobile space evolves at a rapid pace, and if you choose native development, then that should be your goal, to become very, very familiar with the platform you're targeting.

This is one of the reasons many developers choose a hybrid solution. If you are a seasoned web developer, then you will be up and running in no time. Apache Cordova, in combination with Ionic or Onsen UI, can speed up development significantly.

Web applications are certainly something to keep in mind. They are a different category, though. By creating a web application, you have no intention of having an application in any of the mobile stores. Many companies chose this path several years ago. Nowadays, if the budget and resources are available, native and hybrid approaches are more popular.


Ask Yourself

To decide what approach you'd like to take in developing your mobile application, ask yourself the following questions. As I mentioned earlier, the right type of application depends on several factors.

Is performance critical?

If so, native is your best option. You may also want to look into Unreal Engine if you plan to develop a game. As opposed to Unity, they only charge you 5% of the revenue you make instead of a monthly fee.

When we discussed native applications, we saw that they're able to utilize the device capabilities, and a good example of this is how Apple takes graphics performance to the next level using Metal by giving developers almost full access to the GPU on the device.

If you aren't familiar with Metal, here's what Apple has to say about it:

Metal 2 provides near-direct access to the graphics processing unit (GPU), enabling you to maximize the graphics and compute potential of your apps on iOS, macOS, and tvOS.

Now, if performance doesn't matter, you're free to choose a cross-platform or hybrid app, whichever suits your needs better based on the next few questions. That said, if you have the resources to build native apps, that's still the best option overall.

Is cross-platform support important?

If not, you should consider the native route. Native apps, as mentioned above, are definitely the best way to go for a solid user experience and best performance possible. 

If cross-platform support is vital, and you cannot build native apps for all of your platforms, then a cross-platform native or hybrid web approach is your best bet. 

Take a look at Xamarin or React Native if performance and device capabilities are equally important. My personal favorite, though, is Ionic, because it gives you a good deal of on-device functionality while still having a wide enough umbrella to cover most development needs.

Is this your next big thing?

If your goal is to become a mobile developer, then my suggestion would be to choose a native approach. This is very personal, though. I am an iOS developer and I like it this way; I think Apple is disrupting the mobile development field, and it gives you a lot of tools as a developer. By focusing on one platform, in my case, iOS, tvOS, watchOS, and macOS, I have the time to become very familiar with the platform. This is an important aspect of mobile development if you want to create compelling applications and a great user experience.

If you have a background in web development, then native is still an option. However, if you want to get something out the door quickly, then a hybrid or web app is the fastest solution.


Conclusion

If you were expecting a clear-cut answer, then I may have disappointed you. If you haven't made up your mind yet, then I suggest giving some of the options I listed a try. Play around with Cordova, or learn a bit of Swift, and see how you like it. Don't take the easiest or quickest path to your goal. Make sure you also enjoy the journey, because that's where the fun is.

If you already know what approach is the best for you, then you may want to speed up the development of your next application with some of these iOS and Android templates. Check them out on Envato Market:

2018-04-11T19:51:16.497Z2018-04-11T19:51:16.497ZVardhan Agrawal

How to Create an App

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-25757

There are several ways to create a mobile application. Do you want to know what the best way is? It depends. What technologies do you have experience with? What platforms are you targeting? How much time do you want to spend building your application?

After the introduction of the iPhone and it's software development kit, the mobile space went through a revolution. Today, there are millions of mobile applications, countless platforms, and dozens of frameworks and tools to create mobile applications.

How do you decide what is right for you? Answering that question is the focus of this article. I discuss the types of mobile applications you find in the wild, the advantages of native and hybrid applications, and I list some of the more popular platforms.

Application Types

Mobile applications can be broken down into three broad categories:

  • web applications
  • hybrid applications
  • native applications

Each of these types has its pros and cons. If you were to ask me which type best fits your needs then my answer would be that it really depends on exactly what you're trying to do. In order to justify my answer, I first need to tell you about each application type. Let's start with web applications.

Web Applications

You may already be familiar with web applications. Unlike other apps, a web application isn't something you can download from somewhere, it's simply available on any device which can load web pages (or has a web browser). A web application is nothing more than a website, acting and behaving as an application. Before the introduction of the iOS SDK, for example, web applications were the only option for developers wanting to create applications for the original iPhone.

Web applications have a number of distinct advantages, the most important one being development time. Because a web application is a website, it is built once and accessible on every platform that runs a web browser. For some companies, this is a very appealing solution since native development, which we discuss in a moment, can be costly and time-consuming. You could say that web applications are by definition cross-platform.

Another important advantage is the learning curve for developing web applications. To develop a web application, you rely on web technologies that you may already have experience with, such as HTML, CSS, and JavaScript. If you do, then you will be up and running in no time without the use of any proprietary SDKs.

Are there any downsides?

As with any types of applications, there are a few downsides of web development. The two most important drawbacks are performance and access to device capabilities. If you are planning to develop a game, then a web application isn't your best option. It is possible, but performance won't be stellar. Websites and web applications have limited access to the capabilities of the device, such as the camera, location services, etc. This has significantly improved over the years, but it isn't up to par with native applications.

If you want to limit the development and maintenance costs of your mobile application and performance isn't the most important aspect, then a web application is certainly worth considering.

If you want to understand what web applications are about and how to get started, check out some of our other content here on Envato Tuts+:

Hybrid Applications

Hybrid applications were and still are incredibly popular. They combine some of the best things of both worlds, that is, web and native. The technologies used to create hybrid applications are identical to the ones used to create web applications, HTML, CSS, and JavaScript. The reason is obvious if you understand how hybrid applications work.

A hybrid application is a web application that runs in a web view of a native application. Put differently, a hybrid application uses a native application as its container to make it look like a native application. This means that, to the user, a hybrid application looks and feels native, more or less. The user can download it from the platform's mobile store and the application icon appears on their home screen.

Any downsides? Because hybrid applications rely on web technologies and run in a web view, hybrid applications suffer from most of the same problems web applications do. Performance is not up to par with native applications. This is improving, though, every year performance gets better and better. It is impressive how JavaScript performance has improved during the past decade.

The most popular solution for developing hybrid applications is Apache Cordova, Cordova for short. When Adobe acquired PhoneGap a few years ago, they open sourced most of the code base and Cordova was born. PhoneGap still exists and is Cordova's commercial cousin.

To speed up development, developers often use Cordova in combination with other frameworks, such as Ionic and Onsen UI.

To learn more about hybrid applications, I recommend checking out some of the tutorials we have published on Cordova:

Native Applications

Choosing for native development is choosing for performance and reliability. Why does native development scare off so many developers? Let's take the iOS platform as an example. If you want to build a native iOS application, you need to learn a new language, Objective-C or Swift. You also need to become familiar with Xcode, Apple's IDE (Integrated Development Environment). A native application generally takes longer to build simply because you are working closer to the metal so to speak. Objective-C and JavaScript are two very different languages.

What do you get in return? Performance is probably the most compelling advantage of native applications. Native applications feel snappy and, especially for games, they can take full advantage of the resources of the device and operating system. Every feature and capability of the device that is exposed through the SDK's APIs is accessible to the developer. This is another key advantage native has over hybrid and web.

There are a number of hybrid approaches that make native application development accessible to more developers. The solution is simple, write code in the language of your choice and compile it to a native application. The most popular solutions at the time of writing are Xamarin and React Native.

Xamarin lets developers write native applications for iOS, Android, and Windows Phone using C#. The Xamarin tools leverage the Mono open source project. React Native has its roots at Facebook and enables developers to write native applications using JavaScript.

Envato Tuts+ covers a broad range of platforms, including iOS, Android, Xamarin, and React Native. Take a look at these tutorials to become familiar with them:

Native or Hybrid

What is the best solution? Native? Or hybrid? Or a web application? There is no one answer. It depends on several factors. If you are a developer, then the answer is less complicated. What technologies are you already familiar with? Do you want to focus on one platform or create applications for multiple platforms?

It is becoming a true challenge to stay on top of iOS, Android, and Windows Phone. Some developers write native applications for multiple platforms, but it is challenging and if you can do it, I definitely recommend it. The mobile space evolves at a rapid pace, and if you choose native development, then that should be your goal, to become very, very familiar with the platform you're targeting.

This is one of the reasons many developers choose a hybrid solution. If you are a seasoned web developer, then you will be up and running in no time. Apache Cordova, in combination with Ionic or Onsen UI, can speed up development significantly.

Web applications are certainly something to keep in mind. They are a different category, though. By creating a web application, you have no intention of having an application in any of the mobile stores. Many companies chose this path several years ago. Nowadays, if the budget and resources are available, native and hybrid approaches are more popular.


Ask Yourself

To decide what approach you'd like to take in developing your mobile application, ask yourself the questions below. As I mentioned earlier, choosing the type of application to build depends on several factors, so there is no single answer that fits everyone.

Is performance critical?

If so, native is your best option. You may also want to look into Unreal Engine if you plan to develop a game: unlike Unity, Unreal charges you 5% of the revenue you make instead of a monthly fee.

When we discussed native applications, we saw that they're able to utilize the device capabilities, and a good example of this is how Apple takes graphics performance to the next level using Metal by giving developers almost full access to the GPU on the device.

If you aren't familiar with Metal, here's what Apple has to say about it:

Metal 2 provides near-direct access to the graphics processing unit (GPU), enabling you to maximize the graphics and compute potential of your apps on iOS, macOS, and tvOS.

Now, if performance matters less, you're free to choose a cross-platform or hybrid approach, whichever suits your needs better based on the next few questions. That said, if you have the resources to build native apps, that's usually the best option overall.

Is cross-platform support important?

If not, you should consider the native route. Native apps, as mentioned above, are definitely the best way to go for a solid user experience and the best performance possible. 

If cross-platform support is vital, and you cannot get native apps for all your platforms, then a hybrid native or hybrid web approach is your best bet. 

Take a look at Xamarin or React Native if performance and device capabilities are equally important. My personal favorite, though, is Ionic, because it gives you a good deal of on-device functionality while still having a wide enough umbrella to cover most development needs.

Is this your next big thing?

If your goal is to become a mobile developer, then my suggestion would be to choose a native approach. This is very personal, though. I am an iOS developer and I like it this way; I think Apple is disrupting the mobile development field, and it gives you a lot of tools as a developer. By focusing on one platform, in my case, iOS, tvOS, watchOS, and macOS, I have the time to become very familiar with the platform. This is an important aspect of mobile development if you want to create compelling applications and a great user experience.

If you have a background in web development, then native is still an option. However, if you want to get something out the door quickly, then a hybrid or web app is the fastest solution.


Conclusion

If you were expecting a clear-cut answer, then I may have disappointed you. If you haven't made up your mind yet, then I suggest giving some of the options I listed a try. Play around with Cordova, or learn a bit about Swift, and see how you like it. Don't take the easiest or quickest path to your goal. Make sure you also enjoy the journey, because that's where the fun is.

If you already know what approach is the best for you, then you may want to speed up the development of your next application with some of the iOS and Android templates available on Envato Market.

Published 2018-04-11 by Vardhan Agrawal

15 Best Android App Templates With Maps Integration


If you're creating any sort of app that involves getting your users from one place to another, then good map integration is a must. Here are the 15 best Android app templates with map integration to be found at CodeCanyon.

App templates are a great solution for inexperienced coders who want to create apps but don’t have the skill to do so yet. This is because they already have core functions implemented, so you can customise the app easily and add the elements you think are most important to the app's code. This makes it quicker and easier to create the product you want. In addition, app templates are a great way to learn more about coding and perfect your skills.

Whether you’re interested in building a store, restaurant or city guide app or creating a booking app, take a look below to see some of the best templates with map integration currently available. 

1. City Guide

Create your own handy travel guide app for the city of your choice with the City Guide Android app template. The template, which was developed in Android Studio and styled with Material Design, doesn’t require programming skills to use, and with just one config file to set up, is easy to configure and customise. 

You can organise your chosen city highlights in categories like attractions, sports, hotels, nightlife, etc. These chosen highlights can also be viewed as clickable spots on an interactive map which uses geolocation to identify your phone’s current position and distance from each highlight.

City Guide

Other great features:

  • eight colour themes to choose from 
  • photos can be added to each city highlight
  • Google Analytics shows you how people are using your app
  • you can monetise your app with AdMob
  • and more

User michalis1984 says:

“Great start point to build your own app (if you know coding) or to use as it is to create a simple basic app with the provided functionality with limited or even no knowledge of coding. Great work! Coding structure is clear.”

2. Taxi Booking

Uber has completely revolutionised the concept of taxi services and spawned a slew of localised enterprises around the globe that use technology to reach potential clients needing taxi services. 

If you are looking to create an app with Google map integration, to go along with your Uber-type taxi service, check out the Taxi Booking app template. The app allows both passenger and driver login. 

Passengers are able to automatically assign their order to the nearest driver, calculate fare based on distance, cancel a booking, map the route directions from start to drop off, and more. Drivers, on the other hand, can change their availability mode to free or busy, receive instant notification when new requests come in from passengers, map the easiest and fastest route from pickup to drop off, mark a ride as complete, and more.

Taxi Booking

Other great features:

  • web admin panel
  • ability to customise both driver and passenger interface
  • ability to track drivers’ movements
  • track job details of every driver
  • and more

User vsihesller says:

“I am really satisfied with this application. The code is very clean, easy to read. Even though I am new to Android development, still I was able to customize it adding many features to my business specifically.  My business work with 3 different automobiles: motorbike, cars and vans. I could add different icons to each one of them. I added navigation to driver interface and many more. All of that with very basic knowledge of the Android's programming languages. The author was supportive as well. I have 70+ emails reply from the author on my gmail. For all of those reasons mentioned above, I am giving this 5 stars review.”

3. Wheres My Places

Wheres My Places is similar to City Guide above, but the template’s developers are pitching it more as an app for locals rather than visitors to find things like the nearest bank, cafe, hospital, store, etc. 

Having said that, there's nothing stopping developers from adapting it in any way they want. One of the most useful aspects of this app template is that it integrates a great map tool which tells users their distance from their selected destination and estimates the time it will take them to get there. It also acts as a great navigation tool directing users through the streets, so there's never a worry of getting lost in an unfamiliar location. 

Wheres My Place

Other great features:

  • each location comes with full details including address, phone number, map direction, images, user review, etc.
  • results can be sorted by distance or rating
  • add any location to a favourites list
  • supports Google Voice
  • and more

User Popzkg says:

“Great app and awesome customer support.”

4. AdForest 

AdForest is a classified ads app template that would interest developers with clients who need an app to manage product listings for their ad posting business. The template has a built-in text messaging system for easy communication between buyers and sellers. It comes with push notifications to alert users when there’s a message on an ad, and the Google Maps integration allows users to get directions to the seller.

AdForest

Other great features:

  • intelligent advanced search
  • bidding on ads
  • social logins
  • translation ready
  • and more

User Bookflow says:

"Dev team is serious about creating the best one-stop platform for classified niche. I am using both theme and app. Lots of features to fulfil every requirement of classified niche. And their documentation videos are just what a newbie needs to setup well.”

5. Easy Real Estate App

The Easy Real Estate App template helps developers build their own mobile real estate application easily and quickly. The app allows the administrator to submit and edit properties and their description, while the end user of the app can search for the nearest properties to them or for a property in a specific location and then use the integrated Google maps to make their way to that property.

Easy Real Estate App

Other great features:

  • user guide videos
  • AdMob supported
  • translation ready
  • multiple currencies available
  • and more

6. Store Finder 

The Store Finder app template allows developers to create apps that help users find stores near them and let store owners add their store to the app’s listing. Apart from helping users to find their desired store, the app's integrated map also has a powerful zoom feature which allows users to view details of the store’s location and go to store details with one click.

Store Finder

Other great features:

  • store image gallery
  • shows reviews for each store where available
  • AdMob integrated
  • social media sharing possible
  • and more

User SmoothNerds says:

“They deliver excellent customer service. Great communication via Skype and fast customization. Will definitely keep an eye on their portfolio or ask them for future projects.”

7. Restaurant Finder

The highly rated Restaurant Finder app template does exactly what you’d expect based on its name—it helps developers create a database of restaurants which users of their app can then use to find a restaurant of their choice. Restaurants are organised into categories based on food type, and user ratings and reviews are included where available. Map integration provides directions from the user’s location to the restaurant.

Restaurant Finder

Other great features:

  • book a table via mail or SMS
  • ability to add to favourite
  • share via various social media
  • and more

User okunade55 says:

“Excellent app, good code, less bugs and an amazing support.”

8. Ultimate City Guide

Another choice of app template for developers looking to create a guide to key locations around a city of their choice, Ultimate City Guide can be adapted as a guide for tourists and locals alike. City locations can be viewed in a list view, by category or by their location on a map. When users select a map location icon, they are given directions to the place they've selected. The template also gives app owners five monetisation methods, including native ads and referrals to booking websites. 

Ultimate City Guide

Other great features:

  • supports user login via Facebook or email 
  • ratings and reviews of places 
  • sorting by distance, name, or rating 
  • and more 

User candrareza says:

"Awesome work! Clean Code! Very quick support!”

9. NearbyStores

The NearbyStores app template is another option for developers looking to create apps that allow end users to locate businesses in their area and, thanks to the integrated map, to find the easiest and fastest route to them. 

The app is great for business owners as they can send push notifications to mobile clients to inform them about new offers and events near them. It is also a great tool for customers who not only enjoy access to information about an unlimited number of stores in their locale, but also can take advantage of available reviews, comments, and ratings on stores.

NearbyStores

Other great features:

  • Google Analytics
  • AdMob ready
  • real-time chat possible
  • supports multiple languages
  • and more 

User Autopop says:

“Best customer support! They help me to install and set it up. I've had a lot of emails, questions, they reply to each telling me exactly what I had to do!”

10. Universal

The Universal Android app template lets users create just about any app they want by pulling in unlimited content from blogs, timelines, feeds, channels, playlists, webpages, etc., and easily combining them in one customisable app. 

The app has tons of built-in features, including push notification for sending messages to users and the ability to integrate various social media accounts. It also uses maps to show a single location, a collection of locations, or map overlays in your app.

Universal

Other great features:

  • in-app video player and media player
  • save articles and posts from WordPress and RSS offline
  • AdMob advertising
  • and more

User valvze says:

“From the minute I bought this product, I have had nothing but fun. It was an absolute joy creating a personalized app from this template and would definitely recommend this to anybody trying to learn programming. The app is built with easy to figure code and the design elements are fantastic as well. I did have a few hurdles to cover at first but the person behind Sherdle was quick to respond to each of my queries and guide me through it all. Thanks for the amazing app, 10/10 would recommend.”

11. Clinic Booking App

The Clinic Booking App is a nifty app template targeting developers looking to create apps for clients who own clinics or service-oriented businesses. The app allows business owners to include a brief description of the clinic and its location, with an interactive map to get users from wherever they are to the clinic. It also gives critical information like doctor bios, services provided, and prices, and it allows users to easily select treatment services and book appointments. 

Clinic Booking App

Other great features:

  • simple user registration and login
  • ask doctor messaging system
  • multiple languages possible
  • and more

User afrojuju says:

“Excellent support and great app.”

12. TrackMe

The TrackMe app template is simple but powerful. It helps developers to create apps whose sole purpose is to track users' movements on a map through the location of their mobile phones. It is useful for businesses like taxi or bus companies that need to keep track of the location of their vehicles or for parents who want to keep an eye on their children's whereabouts. The app can track users whether they are online or offline. 

TrackMe

Other great features:

  • list user mode
  • send various notifications
  • ability to start and stop tracking
  • and more

User aproduction31 says:

“Excellent customer support.”

13. Catch The Monsters

If you loved Pokémon and want to create a similar app, then this Catch The Monsters geolocation game template is for you. As the administrator of the app, your job is to place as many monsters as you like in various locations anywhere in the world you choose. 

Users of the app search for the monsters around their area. When they get close enough, they can trace the route to the monsters using the integrated map feature. When they find and catch the monsters, they earn points with the goal of getting onto the top 10 leaderboard. Users also have the option of sharing their stats on social networks and SMS.

Catch The Monsters

Other great features: 

  • AdMob banners
  • comprehensive user guide
  • push notifications
  • stats and caught monster list
  • and more

User milkywaylabs says:

“Perfect app and excellent support!”

14. Food Delivery System

The Food Delivery System app template is ideal for restaurants that want to create their own app for regular customers. It allows customers to book a table through the app, order a meal for delivery, call or email the restaurant from within the app, or plot a route to the restaurant from their current location using the integrated map.

 Food Delivery System

Other great features:

  • access to restaurant menu
  • push notification for order status
  • book table reminder
  • image gallery
  • and more

User mikele72 says:

“Great app and great support. I recommend this app!”

15. Explore

With so many people into travelling these days, any app made from the Explore template is bound to be a hit. That’s because this Google Maps app template is designed for users to search for places they want to visit in a given area and save each place on the same map. The map then displays all the places they plan to visit and provides the most logical route between them, complete with distances and journey time.

The template is designed with Google Material Design and features beautiful animation effects. 

Explore

Other great features:

  • easy to configure and customise
  • integrated with Firebase analytics to provide information about your user engagement
  • banner ads and interstitials integrated
  • well documented
  • and more

User Valergiorgio says:

“Super, and good help when I couldn't figure something out.”

Conclusion

These top Android app templates with map integration are just a small selection of the Android app templates available at CodeCanyon, so if none of them quite fits your needs, there are plenty of other great options to choose from!

Published 2018-04-16 by Nona Blackman


New Course: Java 8 for Android App Development


The elegant Java 8 programming language has recently become much easier for Android app developers to use. In our new course, Upgrade to Java 8 for Android App Development, you'll learn how to use Java 8 and will discover the powerful benefits it offers when developing Android apps. 

What You’ll Learn

In this short course, Envato Tuts+ instructor Ashraff Hathibelagal shows you how to add Java 8 to your Android Studio projects and upgrade your code to leverage all the new features and APIs that come with it. 

Java 8 for Android development

With the help of some simple examples, you'll learn how to create and use lambdas, method references, repeating annotations, streams, and more new constructs in your Android apps.

The course consists of just nine bite-sized video lessons with a total viewing time of 37 minutes, so you can easily fit it in around your other commitments.

Watch the Introduction

 

Take the Course

You can take our new course straight away with a subscription to Envato Elements. For a single low monthly fee, you get access not only to this course, but also to our growing library of over 1,000 video courses and industry-leading eBooks on Envato Tuts+. 

Plus you now get unlimited downloads from the huge Envato Elements library of 500,000+ creative assets. Create with unique fonts, photos, graphics and templates, and deliver better projects faster.

Published 2018-04-25 by Andrew Blackman


Get Wear OS and Android Talking: Exchanging Information via the Wearable Data Layer


When it comes to performing quick, simple tasks, wearable apps have the advantage, as a smartwatch that’s right there on your wrist is always going to be more accessible than a smartphone or tablet that’s floating around somewhere in your bag.

But there’s no such thing as the perfect gadget, and no-one’s raving about their smartwatch’s battery life or claiming that it’s every bit as quick and powerful as their smartphone or tablet.

To deliver the best possible user experience, you need to play to a device’s strengths. If you’re developing for Wear OS (the operating-system-formerly-known-as-Android-Wear), then you’re in a unique position to cherry-pick the best features from two very different devices.

Basically, you can have the best of both worlds!

In this article, I'll show you how to make the most of everything Android OS and Wear OS have to offer, by opening a channel of communication between the two. Once your handheld app and its wearable counterpart are chatting, you can delegate each task to the device it's best suited for, whether that's offloading battery-intensive work to the handheld, or making sure your app's most important information is always easily accessible by displaying it on the user's wrist.

By the end of this article, you’ll have created a handheld and a wearable application that can exchange information via the Wearable Data Layer and the MessageClient API.

What Is the Wearable Data Layer?

The Wearable Data Layer provides access to various client classes that you can use to store and retrieve data, without having to get your hands dirty with technical details such as data serialization. Once this information is on the Data Layer, it’s accessible to both the handheld and the wearable device.

In this article, we’ll be focusing on the MessageClient API, which is a one-way communication mechanism that you can use to send information to the Wearable Data Layer. This API is particularly handy for executing remote procedure calls (RPC), such as launching an Activity remotely on the paired handheld or wearable device.

Let’s look at an example: imagine you’ve created a navigation app. This app needs to a) retrieve location updates, and b) give the user directions.

Monitoring the device’s location is an intensive task that can quickly drain the limited battery available to your typical wearable. Using the MessageClient API, your wearable app can instruct its handheld counterpart to perform this work instead. Once the handheld has performed this heavy lifting, it can send the resulting information back to the wearable via the Data Layer, so your app gets the information it needs without taking a chunk out of the wearable’s remaining battery.

As a general rule, if your wearable app needs to perform a task that requires significant battery or processing power, or complex user interactions, then you should consider offloading this work to the corresponding handheld app. By contrast, if your app deals with particularly time-sensitive information, or content that the user is likely to access on the go, then you should display this information on the wearable app. 

In our navigation app example, pushing each set of directions from the handheld to the wearable makes this information more easily accessible, especially for someone who’s out and about, and hopelessly lost!

Out of the box, the MessageClient API is a one-way communication mechanism, but you can implement bidirectional messaging by creating a sender and a receiver in both your project’s handheld and wearable module—which is exactly what we’re going to do.

Creating a Wearable and a Handheld Module

In this article, we’re going to create a wearable app that’ll recognise when its handheld counterpart sends a new message to the Data Layer. This wearable app will then respond by retrieving this message and displaying it as part of its UI.

Then, we’ll rinse and repeat, creating a handheld app that monitors the Data Layer for messages sent from its wearable counterpart.

Information sent via the MessageClient API is only accessible to the application that created it. If the system is going to identify your wearable and handheld as belonging to the same application, then they’ll need to have the same package name, version code, and signing certificate. The easiest way to tick all these boxes is to create a project that consists of both a wearable and a handheld module:

  • Create a new project called DataLayer.
  • On the Target Android Device screen, select Phone and Tablet and Wear. Click Next.
  • For your phone and tablet module, select the Empty Activity template, and then click Next.
  • For your wearable module, select the Blank Wear Activity template, and then click Next, followed by Finish.

Creating Your Handheld App

Since we’re implementing bidirectional communication, both our handheld and our mobile modules need their own listener and sender. Let’s start by implementing this functionality in our handheld application.

I’m going to keep things simple and create a UI consisting of a TextView that’ll display the various messages retrieved from the Data Layer and a button that, when tapped, will send its own message to the Data Layer.  

Open your mobile module’s activity_main.xml file, and add the following:
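The original listing boils down to a TextView plus a Button, so here's a minimal sketch of what that layout might look like. The textview and button IDs, and the two dimens names, are placeholder choices of mine that the rest of this sketch assumes:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    android:padding="@dimen/activity_padding">

    <!-- Displays the messages we retrieve from the Data Layer -->
    <TextView
        android:id="@+id/textview"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:textSize="@dimen/message_text_size" />

    <!-- Tapping this button sends a message to the wearable -->
    <Button
        android:id="@+id/button"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Talk to the Wearable" />

</LinearLayout>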

Since we referenced a few dimens.xml values, we need to provide definitions for these values:

  • Control-click the mobile module’s res/values directory.
  • Select New > Values resource file.
  • Name this file dimens.xml and then click OK.
  • Add the following:
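Assuming the two placeholder dimens referenced in the layout sketch above, the file could be as simple as:

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <!-- Placeholder values; tweak to suit your layout -->
    <dimen name="activity_padding">16dp</dimen>
    <dimen name="message_text_size">20sp</dimen>
</resources>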

This gives us the following user interface:

Create the user interface for your project's handheld component

Add Your Dependencies

Open the mobile module’s build.gradle file and add the following dependencies:
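The essential dependency is play-services-wearable, which provides the MessageClient and NodeClient APIs used throughout this tutorial. A sketch along these lines should work; the version numbers were current around the time of writing, so check for the latest releases:

dependencies {
    // Wearable Data Layer clients: MessageClient, NodeClient, WearableListenerService
    implementation 'com.google.android.gms:play-services-wearable:15.0.0'
    // AppCompatActivity, plus the LocalBroadcastManager we use later
    implementation 'com.android.support:appcompat-v7:27.1.1'
}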

Displaying and Sending Messages in MainActivity

In MainActivity, we need to perform the following:

1. Keep the user in the loop!

When the user taps the Talk to the Wearable button, two things need to happen:

  • The handheld sends a message to the wearable. I’m going to use "I received a message from the handheld."
  • The handheld provides visual confirmation that the message has been sent successfully. I’m going to use "I sent a message to the wearable."

When the user taps the handheld’s Talk to the Wearable button, the handheld will attempt to send a message to the Data Layer. The system only considers this message successfully sent once it’s queued for delivery to a specific device, which means at least one paired device needs to be available.

In the best-case scenario, the user taps Talk to the Wearable, the message gets queued for delivery, and our handheld triumphantly declares: "I just sent a message to the wearable."

However, if no wearable devices are available, then the message isn't queued, and by default the user gets no confirmation that our app has even tried to send a message. This could lead the user to wonder whether the app is broken, so I'm also going to display a Sending message… notification, regardless of whether the message is successfully queued or not.

When testing this app, you may also want to trigger multiple messages in quick succession. To make it clear when each message has been queued for delivery, I’m adding a counter to each message, so our handheld will display I just sent a message to the wearable 2, I just sent a message to the wearable 3, and so on. On the other side of the connection, our wearable will display I just received a message from the handheld 2, I just received a message from the handheld 3, and so on.

2. Display received messages

In the next section, we’ll be creating a MessageService that monitors the Data Layer and retrieves messages. Since our service will be performing its work on a different thread, it’ll broadcast this information to our MainActivity, which will then be responsible for updating the UI.

3. Define the path

Every message you transmit via the MessageClient API must contain a path, which is a string that uniquely identifies the message and allows your app to access it from the other side of the connection.

This path always starts with a forward slash (I’m using /my_path) and can also contain an optional payload, in the form of a byte array.

4. Check your nodes!

In Google Play services 7.3.0 and higher, you can connect multiple wearables to a single handheld device—for example, a user might splash out on multiple wearables that they switch between or use simultaneously. A Wear OS device may also be connected to multiple handheld devices during its lifetime, for example if the user owns an Android smartphone and a tablet, or they replace their old smartphone with a shiny new one. Note that any device that's capable of connecting to the Data Layer is referred to as a node in the application code.

In this article, I'm going to assume there will only ever be a single available wearable. Alternatively, you can get selective about which devices you send messages to, using getConnectedNodes or getLocalNode.

Let’s implement all this in our MainActivity:
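Here's a minimal sketch of how these four requirements might fit together. The ACTION_SEND broadcast action and the "message" extra key are arbitrary conventions of this sketch, which the MessageService we create next must mirror:

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.os.Bundle;
import android.support.v4.content.LocalBroadcastManager;
import android.support.v7.app.AppCompatActivity;
import android.view.View;
import android.widget.TextView;

import com.google.android.gms.tasks.OnSuccessListener;
import com.google.android.gms.wearable.Node;
import com.google.android.gms.wearable.Wearable;

import java.util.List;

public class MainActivity extends AppCompatActivity {

    // The path that identifies our messages on the Data Layer;
    // MessageService checks for exactly the same value.
    public static final String MESSAGE_PATH = "/my_path";

    private TextView textView;
    private int count = 1;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        textView = (TextView) findViewById(R.id.textview);

        // Display whatever MessageService broadcasts to us.
        // For brevity, this sketch never unregisters the receiver.
        LocalBroadcastManager.getInstance(this).registerReceiver(new BroadcastReceiver() {
            @Override
            public void onReceive(Context context, Intent intent) {
                textView.setText(intent.getStringExtra("message"));
            }
        }, new IntentFilter(Intent.ACTION_SEND));

        findViewById(R.id.button).setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                // Keep the user in the loop, even if no wearable is available
                textView.setText("Sending message...");
                sendMessage("I just received a message from the handheld " + count);
            }
        });
    }

    private void sendMessage(final String message) {
        // We assume a single available wearable, so we simply
        // send to every currently connected node
        Wearable.getNodeClient(this).getConnectedNodes()
                .addOnSuccessListener(new OnSuccessListener<List<Node>>() {
                    @Override
                    public void onSuccess(List<Node> nodes) {
                        for (Node node : nodes) {
                            Wearable.getMessageClient(MainActivity.this)
                                    .sendMessage(node.getId(), MESSAGE_PATH, message.getBytes())
                                    .addOnSuccessListener(new OnSuccessListener<Integer>() {
                                        @Override
                                        public void onSuccess(Integer requestId) {
                                            // Success here means "queued for delivery"
                                            textView.setText("I just sent a message to the wearable " + count++);
                                        }
                                    });
                        }
                    }
                });
    }
}

Note that the confirmation text is only set inside the success listener, since sendMessage reports success only once the message has actually been queued for delivery.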

Create a Listening Service

At this point, our handheld is capable of pushing messages to the Data Layer, but since we want to implement bidirectional communication, it also needs to listen for messages arriving on the Data Layer.

In this section, we’re going to create a service that performs the following:

1. Monitor the Data Layer for events

You can monitor the Data Layer either by implementing the DataClient.OnDataChangedListener interface or by extending WearableListenerService. I’m opting for the latter, as there are a few benefits to extending WearableListenerService. Firstly, WearableListenerService does its work on a background thread, so you don’t have to worry about blocking the main UI thread. Secondly, the system manages the WearableListenerService lifecycle to ensure it doesn’t consume unnecessary resources, binding and unbinding the service as required.

The drawback is that WearableListenerService will listen for events even when your application isn’t running, and it will launch your app if it detects a relevant event. If your app only needs to respond to events when it’s already running, then WearableListenerService can drain the device’s battery unnecessarily.

2. Override the relevant data callbacks

WearableListenerService can listen for a range of Data Layer events, so you’ll need to override the data event callback methods for the events you’re interested in handling. In our service, I’m implementing onMessageReceived, which will be triggered when a message is sent from the remote node.

3. Check the path

Every time a message is sent to the Data Layer, our app needs to check whether it has the correct /my_path identifier.

4. Broadcast messages to MainActivity

Since WearableListenerService runs on a different thread, it can’t update the UI directly. To display a message in our application, we need to forward it to MainActivity, using a LocalBroadcastManager.

To create the service:

  • Make sure you have the mobile module selected.
  • Select New > Service from the Android Studio toolbar.
  • Name this service MessageService.
  • Add the following:
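A sketch of such a service, matching the path and broadcast conventions assumed in the MainActivity sketch above:

import android.content.Intent;
import android.support.v4.content.LocalBroadcastManager;

import com.google.android.gms.wearable.MessageEvent;
import com.google.android.gms.wearable.WearableListenerService;

public class MessageService extends WearableListenerService {

    @Override
    public void onMessageReceived(MessageEvent messageEvent) {
        // Only handle messages carrying our application's path identifier
        if (messageEvent.getPath().equals(MainActivity.MESSAGE_PATH)) {
            // Rebuild the text from the payload's byte array...
            String message = new String(messageEvent.getData());
            // ...and forward it to MainActivity, since a background
            // service can't touch the UI directly
            Intent intent = new Intent(Intent.ACTION_SEND);
            intent.putExtra("message", message);
            LocalBroadcastManager.getInstance(this).sendBroadcast(intent);
        } else {
            super.onMessageReceived(messageEvent);
        }
    }
}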

Finally, open the Manifest and add some information to the MessageService entry:
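Specifically, the service needs an intent filter for the MESSAGE_RECEIVED action, scoped to our path, so the system knows to bind it when a matching message arrives. Assuming the /my_path identifier used throughout this sketch:

<service
    android:name=".MessageService"
    android:enabled="true"
    android:exported="true">
    <intent-filter>
        <!-- Wake the service whenever a Data Layer message arrives... -->
        <action android:name="com.google.android.gms.wearable.MESSAGE_RECEIVED" />
        <!-- ...but only for messages sent with our path -->
        <data
            android:scheme="wear"
            android:host="*"
            android:pathPrefix="/my_path" />
    </intent-filter>
</service>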

As already mentioned, the system only considers a message successfully sent once it’s queued for delivery, which can only occur if one or more wearable devices are available.

You can see this in action by installing the mobile module on a compatible smartphone or tablet, or an Android Virtual Device (AVD). Click the Talk to the Wearable button and the app will display the Sending message… text only; the I just sent a message to the wearable… text won't make an appearance. 

If our message is ever going to be queued for delivery, then we need to implement another set of sender and receiver components in our project’s wearable module.

Creating Your Wearable App

Our wearable app is going to have similar functionality to its handheld counterpart, so I'll be skipping over all the code that we've already covered.

Once again, let’s start by creating the app’s user interface. Open the wear module’s activity_main.xml file and add the following:
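On Wear, a BoxInsetLayout keeps the content inside the visible area of round screens. A sketch using the same placeholder IDs as before, with the button text matching the wearable's role:

<?xml version="1.0" encoding="utf-8"?>
<android.support.wear.widget.BoxInsetLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:orientation="vertical"
        app:boxedEdges="all">

        <!-- Displays messages retrieved from the Data Layer -->
        <TextView
            android:id="@+id/textview"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content" />

        <!-- Sends a message to the handheld when tapped -->
        <Button
            android:id="@+id/button"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Talk to the handheld" />

    </LinearLayout>

</android.support.wear.widget.BoxInsetLayout>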

At this point, your user interface should look something like this:

Create the UI for your Android project's Wear OS module

Open your build.gradle and add the following dependencies:
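The Blank Wear Activity template already pulls in most of the wearable libraries; the key addition is play-services-wearable. A sketch, with versions current around the time of writing:

dependencies {
    // Wearable Data Layer APIs: MessageClient, NodeClient, WearableListenerService
    implementation 'com.google.android.gms:play-services-wearable:15.0.0'
    // Wear UI components, such as BoxInsetLayout
    implementation 'com.android.support:wear:27.1.1'
    // WearableActivity and other wearable support classes
    implementation 'com.google.android.support:wearable:2.3.0'
    compileOnly 'com.google.android.wearable:wearable:2.3.0'
}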

Now, we need to send our message to the Data Layer:
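As before, this is a sketch rather than the original listing; it assumes the placeholder IDs from the layout above and the same /my_path and broadcast conventions as the handheld module:

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.os.Bundle;
import android.support.v4.content.LocalBroadcastManager;
import android.support.wearable.activity.WearableActivity;
import android.view.View;
import android.widget.TextView;

import com.google.android.gms.tasks.OnSuccessListener;
import com.google.android.gms.wearable.Node;
import com.google.android.gms.wearable.Wearable;

import java.util.List;

public class MainActivity extends WearableActivity {

    private TextView textView;
    private int count = 1;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        setAmbientEnabled();
        textView = (TextView) findViewById(R.id.textview);

        // Display messages forwarded by the wear module's MessageService
        LocalBroadcastManager.getInstance(this).registerReceiver(new BroadcastReceiver() {
            @Override
            public void onReceive(Context context, Intent intent) {
                textView.setText(intent.getStringExtra("message"));
            }
        }, new IntentFilter(Intent.ACTION_SEND));

        findViewById(R.id.button).setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                textView.setText("Sending message...");
                final String message = "I just received a message from the wearable " + count;
                // The only connected node should be the paired handheld
                Wearable.getNodeClient(MainActivity.this).getConnectedNodes()
                        .addOnSuccessListener(new OnSuccessListener<List<Node>>() {
                            @Override
                            public void onSuccess(List<Node> nodes) {
                                for (Node node : nodes) {
                                    Wearable.getMessageClient(MainActivity.this)
                                            .sendMessage(node.getId(), "/my_path", message.getBytes())
                                            .addOnSuccessListener(new OnSuccessListener<Integer>() {
                                                @Override
                                                public void onSuccess(Integer requestId) {
                                                    // Only report success once the message is queued
                                                    textView.setText("I just sent the handheld a message " + count++);
                                                }
                                            });
                                }
                            }
                        });
            }
        });
    }
}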

Next, we need to create a listener that’ll monitor the Data Layer for incoming messages and notify MainActivity whenever a new message is received:

  • Make sure the wear module is selected.
  • Choose New > Service from the Android Studio toolbar.
  • Name this service MessageService and then add the following:
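The wear module's listener is essentially identical to its handheld counterpart; a compact sketch, using the same /my_path and broadcast conventions:

import android.content.Intent;
import android.support.v4.content.LocalBroadcastManager;

import com.google.android.gms.wearable.MessageEvent;
import com.google.android.gms.wearable.WearableListenerService;

public class MessageService extends WearableListenerService {

    @Override
    public void onMessageReceived(MessageEvent messageEvent) {
        if (messageEvent.getPath().equals("/my_path")) {
            // Forward the payload to MainActivity, which owns the UI
            Intent intent = new Intent(Intent.ACTION_SEND);
            intent.putExtra("message", new String(messageEvent.getData()));
            LocalBroadcastManager.getInstance(this).sendBroadcast(intent);
        } else {
            super.onMessageReceived(messageEvent);
        }
    }
}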

Open the module’s Manifest, and create an intent filter for the WearableListenerService:
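This is the same filter we added on the handheld side:

<service
    android:name=".MessageService"
    android:enabled="true"
    android:exported="true">
    <intent-filter>
        <action android:name="com.google.android.gms.wearable.MESSAGE_RECEIVED" />
        <data
            android:scheme="wear"
            android:host="*"
            android:pathPrefix="/my_path" />
    </intent-filter>
</service>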

You can download the complete project from GitHub.

Testing Your App

At this point you have two apps that can exchange messages over the Data Layer, but if you're going to put these communication skills to the test, you’ll need to install your project on a handheld and a wearable device.

If you're an Android developer, then chances are you have at least one Android smartphone or tablet lying around, but wearables still feel like a relatively new and niche product, so you might not have invested in a smartwatch just yet.

If you do decide to pursue Wear OS development, then you should take the plunge and purchase a smartwatch at some point, as there’s no substitute for testing your app on a real Android device. However, if you’re just experimenting with Wear OS, then you can create an AVD that emulates a wearable, in exactly the same way you create an AVD that emulates a smartphone or tablet. You can then get your AVD and your physical Android device talking, using port forwarding.

The first step is to create a wearable AVD and install your wear module on this emulated device:

  • Select Tools > Android > AVD Manager from the Android Studio toolbar.
  • Click Create Virtual Device…
  • Select Wear from the left-hand menu.
  • Choose the wearable that you want to emulate, and then click Next.
  • Select your system image, and then click Next.
  • Give your AVD a name, and then click Finish.
  • Select Run > Run… from the Android Studio toolbar.
  • In the little popup that appears, select Wear…
  • Select the wearable AVD that you just created. After a few moments, the AVD will launch with your wearable component already installed.

Next, install the handheld module on your smartphone or tablet:

  • Connect your physical Android device to your development machine.
  • Select Run > Run… from the Android Studio toolbar.
  • Choose mobile when prompted.

Finally, we need to get our physical Android device and our AVD talking:

  • Make sure Bluetooth is enabled on your handheld (Settings > Bluetooth) and that it’s connected to your development machine via USB cable.
  • On your handheld device, open the Play Store and download the Wear OS by Google app (formerly Android Wear).
  • Launch the Wear OS application.
  • On your emulated wearable, click the Home button in the accompanying strip of buttons (where the cursor is positioned in the following screenshot) and then open the Settings app.

Testing your project by connecting your emulator and your Android smartphone or tablet

  • Select System > About and click the Build number repeatedly, until you see a You are now a developer message.
  • Return to the main Settings menu by clicking the Back button twice. You should notice a new Developer Options item; give it a click.
  • Select ADB Debugging.
  • On your development machine, open a new Command Prompt (Windows) or Terminal (Mac), and then change directory (cd) so it's pointing at the Android SDK's platform-tools folder (see the command sketch after this list).
  • Make sure ADB (Android Debug Bridge) is recognizing both the emulator and your attached smartphone or tablet, by running the ./adb devices command. It should return the codes for two separate devices.
  • Forward your AVD's communication port to the attached smartphone or tablet, by running the port-forwarding command shown in the sketch after this list.
  • On your handheld, launch the Wear OS app. Navigate through any introductory dialogues, until you reach the main Wear OS screen.
  • Open the dropdown in the upper-left corner and select Add a new watch.
  • Tap the dotted icon in the upper-right corner, and select Pair with emulator. After a few moments, the handheld should connect to your emulator.
Use the Wear OS app to pair your emulator with your Android smartphone or tablet
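For reference, on a Mac with the SDK in its default location, the whole exchange might look like this (the path will differ on your machine; 5601 is the communication port used by the Wear emulator):

cd ~/Library/Android/sdk/platform-tools   # default SDK location on macOS
./adb devices                             # should list both the AVD and your handheld
./adb -d forward tcp:5601 tcp:5601        # forward the AVD's port to the handheld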

You’re now ready to test your app! Launch the Wear component on your emulator and the mobile component on your handheld, and experiment by tapping the different Talk... buttons.

When you tap Talk to the wearable on the handheld, the following messages should appear:

  • Handheld: “I just sent a message to the wearable.”
  • Wearable: “I just received a message from the handheld.”
You can now exchange messages over the Data Layer using the MessageClient API

When you tap Talk to the handheld on the wearable, the following messages should appear:

  • Wearable: “I just sent the handheld a message.”
  • Handheld: “I just received a message from the wearable.”

Conclusion

In this article, we looked at how to exchange messages between your handheld and your wearable app, over the Wearable Data Layer. 

In production, you’d probably use this technique to do something more interesting than simply exchanging the same few lines of text! For example, if you developed an app that plays music on the user’s smartphone, you could give them the ability to play, pause, and skip songs directly from their wearable, by sending these instructions from the wearable to the handheld, over the Data Layer.

You can learn more about the Wearable Data Layer, including how to sync more complex data, over at the official Android docs.

Published 2018-04-27 by Jessica Thornsby