
Sympli for Developers


Introduction

If you haven't heard of it before, Sympli is a tool designed to simplify the process of taking an interface designed in Photoshop or Sketch and implementing it for the web or as a functional iOS or Android application. The workflow goes like this: first, a designer creates a project for web, iOS, or Android, which can contain any number of designs. These designs represent the different screens that should be available in the application you are developing. Next, the developer can use these designs to easily create an interface for a website or new app.

In this article, I will show you some of the many features that Sympli offers developers for easily creating iOS or Android apps, building on the work done by designers.

If you want to see what Sympli has to offer for designers, then check out this article by Kezz Bracey.

1. IDE Plugins

Using Sympli as a developer begins with downloading and installing a plugin for either Android Studio or Xcode. Installing these plugins is very easy, and the video tutorials shown on the linked download pages will help you out if you have any problems.

2. Inspecting Design Mockups

The Sympli plugins for Android Studio and Xcode provide access to interactive design specifications (some teams use the term "redline documents"). Open a mockup and click on the design elements to get all the information required to implement the design in your app.

As shown in the following screenshot, Sympli gives you all the information you could possibly need about any particular view, so you can implement a pixel-perfect design manually in code, in Interface Builder, or in the Layout Editor.

Sympli View Properties

Please note that Sympli automatically converts pixels in design mockups to points, and translates other parameters, such as fills, shadows, and borders, into Android- or iOS-specific terms and units.

Also, if the mockup was created in Sketch, the Sympli plugin will display the resizing rules applied to the widgets in Sketch, which helps developers set proper constraint values.

Drag & Drop Views

One of Sympli's main features for developers is the ability to just drag and drop views from a design into an Android XML or iOS Storyboard file. Sympli takes care of a lot of the hassle when creating interfaces by positioning and sizing your views exactly as they appear in the original design. In addition to this, Sympli can also configure many other attributes such as background colour and custom fonts for text views.

To apply styling to an existing view in Interface Builder in Xcode, hold the Shift key, then drag and drop the design element onto the view.

To generate styling code for views created programmatically, drag and drop into your controller's code with the right mouse button pressed.

From here, all you have to do is modify the constraints of your views so that they adapt as you would expect on devices with different screen sizes. For iOS this means adding Auto Layout constraints, and for Android it means configuring the views in the right sort of layout for your design. 

Building Custom Views With Sympli

In addition to generating styling code for standard views, Sympli's plugin for Xcode helps developers build custom controls based on the vector data from the design mockup.

Select a vector shape on the mockup and press the "Snippet" button next to the layer's name in the details panel. This will bring up a popup window with Swift code that programmatically draws the shape exactly as it was designed. There is also a handy option to copy Xcode Playground-ready code so you can continue building a custom view with a live playground preview right away.

Sympli Generated Core Graphics Code

This is extremely useful for any applications which require some manual drawing of views on the screen.
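To give a feel for what such a snippet contains, here is a minimal, hand-written sketch of that kind of Core Graphics drawing code in a UIView subclass. The class name, shape, colours, and values are illustrative assumptions, not Sympli's actual output:

    import UIKit

    class BadgeView: UIView {

        override func draw(_ rect: CGRect) {
            // A rounded-rectangle path standing in for the vector shape.
            let path = UIBezierPath(
                roundedRect: CGRect(x: 8, y: 8, width: 120, height: 40),
                cornerRadius: 8
            )

            // Fill and stroke colours as they would be read off the design.
            UIColor(red: 0.20, green: 0.60, blue: 0.86, alpha: 1.0).setFill()
            path.fill()

            UIColor.white.setStroke()
            path.lineWidth = 2
            path.stroke()
        }
    }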

3. Assets Import

As long as everything has been configured and uploaded correctly by the designer, Sympli can take care of importing images and custom fonts used in the design. Upon import, Sympli prompts the designer to name the image or font according to the platform's best practices. For example, if an image called Image 1 is being uploaded to an Android project, Sympli will prompt the designer to rename it to image_1. This ensures that you don't have to waste development time renaming files so that they can be loaded easily. In addition, developers can create renaming rules that will be applied every time the mockup is updated.

In both the Xcode and Android Studio plugins, clicking on the button shown below when viewing the images or fonts in a design will import them into your project. 

Asset Import Button

Sympli is very intelligent about importing assets. It will put images into your asset catalogs on iOS and in your project's resources folder on Android; it will even create scaled versions for different devices automatically.

Sympli Xcode Image Import

Note: Sympli has announced that an option to export vector assets (PDF for iOS and VectorDrawable for Android) from any vector layer in the mockup will soon be added to both the Android Studio and Xcode plugins.

4. Automatic Syncing of Design Mockups

By default, Sympli enables automatic syncing for your project's design in both the Xcode and Android Studio plugins. This means that, even as you are working, if the designer makes some changes and uploads them to Sympli, the new design will immediately be available in Xcode and Android Studio. 

When changes are made to a design, Sympli will automatically download the latest version of the design and notify you of the update. This ensures that you never have to manually check that you're working with the latest designs and also eliminates the need for the designer to notify you when they've made changes.

5. Design Versions

In addition to automatically downloading the most recent copies of your project's designs, Sympli makes it very easy to view previous versions of any design, both in the IDE plugins and in Sympli's web app. 

This can be particularly useful if you aren't sure what changes have been made in the latest version of a specific design. In Sympli's web app, you can easily flick between different versions of the same design to see what changes have been made:

Sympli Design Version Comparison

Lastly, this backlog of previous versions can also be very useful if a revision of your app requires an older design to be used. Instead of hunting for an old file in your downloads or in an email, with Sympli you can just select a version from a simple drop-down list in the IDE plugin:

Sympli Plugin Version Picker

The Sympli web app provides a change browser where you can visually compare any two versions of a design mockup and see the changes side by side. This makes additions, deletions, and other updates immediately obvious to the eye, increasing the team's productivity as a result. Not only that, developers can also see the changes at the property level, for example, if a colour changes slightly or a border becomes 1px thicker.

For any mockup uploaded to Sympli more than once, there will be a "Browse Changes" button in the top bar that opens a side-by-side change browser. Select the mockup versions you want to compare and click on the highlighted regions to see the actual changes.

6. Project Summary

Both the Sympli web app and the IDE plugins can show you a Summary for any project. This summary screen shows you all the colours and fonts used throughout the entire project. This can be very useful if you need the details of a specific colour or font and aren't entirely sure which design that resource is used in. It can also serve as an always-available reference when developing your app, if you need an exact colour or font someplace where a design hasn't been provided to you. 

Sympli Summary Screen

Conclusion

As you can see, Sympli makes it much easier to develop an app from interface designs created by someone else. Sympli takes care of a lot of the manual work involved in converting PSDs or Sketch files into storyboards for iOS or XML layouts for Android. The time saved can be spent more productively: for example, on actual functionality rather than tediously copying colour codes and images!

If you want to find out more about Sympli or would like to try it yourself, then head over to their website and check out some of their great video tutorials on how to install and use the Xcode and Android Studio plugins.

As always, please be sure to leave your feedback and questions in the comments below.

2016-09-28 · Davis Allie

Create a Pokémon GO Style Augmented Reality Game With Vuforia

What You'll Be Creating

1. Introduction

In the first post of this series we talked about how awesome Vuforia is for creating Augmented Reality experiences, and now we're ready to practice these concepts in an actual app. In this tutorial, we'll start to play around with Augmented Reality using Vuforia on Unity 3D. We'll learn how to set up Vuforia and start developing an AR game from scratch, adopting a logic similar to the one used in Pokémon GO!

You won't need any previous experience with Unity or Vuforia to follow this tutorial.

1.1. Quick Recap: How Does Vuforia Work?

Vuforia uses the device's camera feed combined with accelerometer and gyroscope data to examine the world. Vuforia uses computer vision to understand what it 'sees' on the camera and create a model of the environment. After processing the data, the system can roughly locate itself in the world, knowing its coordinates: which way is up, down, left, or right, and so on.

If you don’t know what Vuforia is about, take a look at the first post in this series.

1.2. What Will We Learn?

This tutorial is divided into two parts. In this one, we'll see some of the particularities of Vuforia on Unity 3D, we'll learn how to set up the environment, and we'll also start developing a small AR game called Shoot the Cubes. We'll pay special attention to the ARCamera Prefab, one of the most important parts of Vuforia in Unity.

In the second part, we'll continue to develop the Shoot the Cubes game, adding interactivity and making it more interesting. This section won't go too much into Vuforia's particularities, as the idea will be to explore some possibilities offered by Unity to create an engaging Augmented Reality experience.

2. Vuforia on Unity

Unity is a popular and powerful game engine that is easy to use and can compile games for multiple platforms. There are several advantages to using Unity to create AR experiences with Vuforia: it's possible to target all of Vuforia's supported systems, including smart glasses, and it's simpler to use, thanks to the prefabs provided by Vuforia's SDK. Using Unity alone, it is possible to access all the features available in Vuforia.

2.1. Vuforia Prefabs

You can access all Vuforia's features on Unity using the Vuforia prefabs. All that you have to do is drag the object to the stage and configure it. As the name suggests, prefabs are like templates for creating and cloning Unity objects complete with components and properties. For example, the ImageTarget represents images that can be used as targets. Let's take a look at the Vuforia prefabs available on Unity:

  • ARCamera: The most important prefab. It manages the overall AR experience, controlling the render quality, defining the center of the world, the device camera to be used, the maximum targets to be tracked, and so on. In this tutorial we'll concentrate our efforts on understanding how to use this object.
  • Targets: All Vuforia targets have their own prefab: ImageTarget, MultiTarget, CylinderTarget, ObjectTarget, UserDefinedTargetBuilder, VuMark, FrameMarker. Those targets will be recognized by the ARCamera and start an action, like exhibiting a 3D object or animation.
  • CloudRecognition: Used to access targets defined in the Vuforia cloud system.
  • SmartTerrain and Prop: Those objects are used in the Smart Terrain feature.
  • TextRecognition and Word: Prefabs used in the Text Recognition feature.
  • VirtualButton: Vuforia can understand Targets as buttons that can be physically pressed by the user. This prefab will help you to use this resource.

3. Creating Our First AR Experience

The game that we'll develop is simple, but it illustrates the Augmented Reality principles well, and it will teach us some of Vuforia's fundamentals. The game's objective is to find and shoot cubes that are flying around the room. The player will search around for the cubes using his or her device and 'tap' to shoot the boxes. We won't concern ourselves with scores, levels, or anything like that, but you can easily expand on these aspects of the game yourself.

3.1. Preparing Unity for Vuforia

Before we start playing around, we’ll need to prepare Unity for Vuforia. The process is quite simple, and we basically need to import Vuforia's SDK package and add an ARCamera prefab to our project.

  • Create a developer account on Vuforia.
  • Log in and download the Vuforia SDK for Unity.
  • Open Unity and create a new project called "Shoot the Cubes".
  • After the Unity project window opens, go to Assets > Import Package > Custom Package and select the downloaded SDK.
  • Import everything.
  • Delete the Camera object in the Hierarchy window.
  • Go to License Manager on Vuforia’s developer portal and create a new license using your developer account.
  • Copy the license key.
  • Back in Unity, in the Project window, go to Assets > Vuforia > Prefabs > ARCamera. Select the element and drag it to the Hierarchy window.
ARCamera Prefab
  • With ARCamera selected, in the Inspector panel, go to Vuforia Behavior (Script), find the field App license key, and paste the license you created in Vuforia's developer portal.
Paste the License Key on ARCamera prefab
  • Click the Apply button near the top of the Inspector pane to add the license key to all ARCamera prefabs on this project.
Apply the changes on the ARCamera prefab

3.2. Testing if Vuforia Is Working

It's time to check if the environment is working correctly. 

Using Your Computer Camera

If you have a webcam on your computer, you can press Unity's play button to check if the ARCamera is working. It will be possible to recognize targets using the webcam; however, it won't be possible to use any sensor data to test your AR experience. If the camera feed doesn't show in the Game window, it's possible that your camera isn't compatible with the webcam profile provided by ARCamera.

Press the PLAY button on Unity

Configuring the Application to Run on a Device

The best way to test your Vuforia application is directly on the device. We'll compile the project for Android, but the same steps would apply to iOS devices.

  • First, we need to save the Scene that we're working on. Go to File > Save Scene.
  • Select the Assets folder and create a new folder called Scenes.
  • Save this scene as ShootTheCubesMain.
  • Go to File > Build Settings.
  • Select Android and click on Switch Platform. If this option is disabled, you'll have to download Unity's build support module for that platform.
Unity Build Settings
  • Click on Player Settings and configure the project in the Inspector window.
Unity Player Settings
  • Pay attention to some options: Turn off the Auto Graphics API and make sure that OpenGLES2 is selected for the Graphics API option.
  • Type the Bundle Identifier.
  • For Android devices, make sure that the Minimum API Level selected is API 9 or greater. You'll also need to use ARMv7 for the Device Filter option.
  • If you followed the steps correctly, the project is ready to be built. However, if this is the first time that you're compiling a Unity project for Android or iOS, you have to configure Unity for those devices. Follow this guide for Android and this for iOS.
  • To run the project, go back to Build Settings and click on Build and Run.

After the build completes, the application will be installed on your device. For now, all that you should expect is to see the camera feed on your device without any errors. If you've got that, everything worked properly.

3.3. Using the ARCamera Prefab

The objective of the Shoot the Cubes game is to search out and shoot flying cubes using the device's camera and sensors. This approach is similar to the one used on Pokémon GO. To accomplish this, we'll only need to use the Vuforia ARCamera prefab.

There are lots of scripts attached to the ARCamera. For now, the only one that you'll need to understand is the Vuforia Behavior script. Let's take a look at its options:

  • App License Key: Where the Vuforia license key should be inserted.
  • Camera Device Mode: Controls the render quality of the objects.
  • Max Simultaneous Tracked Images: Defines the maximum number of image targets tracked at the same time. Vuforia doesn’t recommend more than five at once.
  • Max Simultaneous Tracked Objects: Defines the maximum number of objects tracked at the same time. Again, Vuforia doesn’t recommend more than five at the same time.
  • Load Object Targets on Detection: Loads the object associated with the target as soon as the target is detected.
  • Camera Direction: Choose which device camera to use.
  • Mirror Video Background: Defines if the camera feed should be mirrored.
  • World Center Mode: The most relevant option for our project. It defines how the system should locate the center of the world: 
    • SPECIFIC_TARGET: Uses a specific target as a reference to the world.
    • FIRST_TARGET: The first target detected will be used as a reference to the world.
    • CAMERA: Uses the camera as a reference point to the world.
    • DEVICE_TRACKING: Uses the device’s sensor as a reference to set the world’s positions. This is the option that we need to choose for our little project.

For now, all that you'll need to change in the ARCamera is the World Center Mode. Click on the ARCamera element in the Hierarchy, and in the Inspector pane, change the World Center Mode to DEVICE_TRACKING.

3.4. Using the Device's Sensor to Find the Center of the World

Let's add a cube to the stage and test if the ARCamera is working correctly.

  • Make sure that ARCamera's position and rotation are set to 0 on the X, Y, and Z axes.
ARCamera Transform Options
  • Create a Cube object from Game Object > 3D Object > Cube.
Create a Cube Object
  • Set the cube's Position to 10 on the Z axis and 0 on the X and Y axes.
  • Scale the object to 2 on the X, Y, and Z axes.
  • Rotate the cube 45 degrees on the X and Y axes.
Change the Cube Position Rotation and Scale
  • You can press the play button to check if the cube is positioned correctly.
  • Once you're certain that the cube is positioned correctly, build the project again and test it on the device. To build, go to File > Build and Run.

You'll have to look around by rotating your device to find the cube. You'll notice that the object remains still in the same place, even after you rotate the device away from it. It's as if the cube 'exists' in the real world, but can only be seen with the device camera.

The cube remains in place even after the device rotates

3.5. Setting the Elements' Position According to ARCamera

The first problem with our application so far is that the cube may appear anywhere and the user will have to look around to find it. Since the center of the world is defined according to the device's sensors, we cannot be sure of the actual position of the elements. This is because the user might start off with the device in any orientation, and because the way rotation is measured varies from device to device.

In order to make sure that the AR entities start off in view of the user, the easiest approach is to wait for Vuforia to define the center of the world, find the ARCamera rotation, and then arrange the starting location of the elements according to that orientation.

We'll create a Spawn Manager to define the position of the cubes to be spawned. The manager will set its position according to the ARCamera rotation: it will wait until the rotation is set, and then move 10 units in front of the camera.

  • Create two empty objects with Game Object > Create Empty. Right click on one of the objects you just created and rename it to _SpawnController.
  • Change the name of the other empty object to _GameManager.
  • In the Project window, select the Assets folder and create a new folder called Scripts.
  • In the Scripts folder, create a C# script called SpawnScript.
  • Drag the SpawnScript to the _SpawnController.
  • Double click on SpawnScript to edit it.

First let's add the Vuforia package.

To access ARCamera, use Camera.main. Let's create a function to get the camera position and set the cube to be spawned 10 units forward from this point.

We'll change the position only once from the Start function. ChangePosition is a coroutine that will wait a small amount of time before setting the position.
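Putting those pieces together, here is a minimal sketch of what the SpawnScript class could look like. The field names and the wait time are assumptions for illustration; the Vuforia import, Camera.main, the Start() method, and the ChangePosition coroutine follow the description above:

    using UnityEngine;
    using System.Collections;
    using Vuforia; // the Vuforia package

    public class SpawnScript : MonoBehaviour
    {
        // Assumed values: how far ahead of the camera to sit, and how
        // long to wait for Vuforia to define the center of the world.
        public float distanceFromCamera = 10f;
        public float initialWait = 0.5f;

        void Start()
        {
            // The position is changed only once, when the scene starts.
            StartCoroutine(ChangePosition());
        }

        // Waits a small amount of time, then places this object
        // in front of the ARCamera, wherever it is pointing.
        IEnumerator ChangePosition()
        {
            yield return new WaitForSeconds(initialWait);

            // The ARCamera is accessible through Camera.main.
            Transform cam = Camera.main.transform;

            // Move 10 units forward from the camera's position.
            transform.position = cam.position + cam.forward * distanceFromCamera;
        }
    }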

Let's test the script:

  • Back in Unity, click on the _SpawnController object and use Game Object > 3D Object > Sphere to insert a sphere inside _SpawnController.
  • Select the sphere and make sure its position is set to 0 on the X, Y, and Z axes. 
  • Now we'll overlap the cube and _SpawnController so you can see the importance of the script. Select _SpawnController and set its position to 0 on the X and Y axes and to 10 on the Z axis, the same position as the cube. 

The elements start out overlapping; however, once you build and run the application on a device, you'll see that the _SpawnController and its sphere will appear in front of the camera, and the cube will be in another place. Go ahead and test it! Make sure you're looking at the device right when the app starts.

4. Conclusion

Congratulations, you've created your first Augmented Reality experience. Yes, it's a little rough, but it is working! In this tutorial you've learned how to use Vuforia's main prefab in Unity, the ARCamera. You also learned how to configure it and how to use the device sensors to create the illusion that a virtual object is inserted into the world.

4.1. What's Next?

In the next tutorial we'll improve this principle to create a real game and a more engaging experience. We'll continue to develop the Shoot the Cubes game, adding some interactivity and exploring Unity's possibilities for creating an interesting AR Game. We'll make the cubes spawn and fly around, and we'll let the player search and destroy them by shooting a laser out of the device.

See you soon!

Special thanks for the vector image designed by Freepik, licensed under Creative Commons CC BY-SA.

2016-09-28 · Tin Megali


Animate Your React Native App


Animation is an important part of user experience design. It serves as feedback on user actions, informs users of system status, and guides them on how to interact with the interface. 

One of the tools that I'm using to create cross-platform mobile apps is React Native, so in this tutorial I'll walk you through how to implement animations in this platform. The final output for this tutorial will be a kitchen sink app that implements different kinds of animations. Here's how it will look:

React Native Animations Kitchen Sink App

I'll be assuming that you already know the basics of working with React Native, so I won't be delving too much into the code that doesn't have something to do with animations. For more background on React Native, check out some of my other tutorials.

We will be specifically working on the Android platform, but the code used in this tutorial should work on iOS as well. In fact, if you don't want to deal with the pain of setting up a new React Native project, I recommend that you check out React Native Web Starter. This allows you to create a new React Native project that you can preview in the browser. This comes with the benefit of not having to set up a device, and faster hot reloading so you can preview your changes faster.

Your First Animation App

If you haven't done so already, create a new React Native project:
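With the standard React Native CLI of the time, that would be something like the following; the project name is just an example:

    react-native init AnimationsApp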

If you're using React Native Web Starter, here's how you create a new project:

Open the index.android.js (or index.web.js) file, remove the default code, and add the following:
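A minimal sketch of that entry file, assuming the project was created with the name AnimationsApp and that the App component lives where we create it below:

    import { AppRegistry } from 'react-native';
    import App from './app/components/App';

    // The registered name must match the name the project was created with.
    AppRegistry.registerComponent('AnimationsApp', () => App);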

If you're on React Native for Web, you can skip the above step as the default code is already set up to use the App component.

Create an app/components folder, and inside it create an App.js file. This will be the primary file that we're going to work with. Once you've created the file, you can go ahead and import the packages that you will need for the whole project.

If you've done any sort of React Native development before, you should already be pretty familiar with the following components. If not, take a look at the React Native API docs.

These are the packages that are specifically used for implementing animations:
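A plausible combined import block follows. The UI components listed are the ones this tutorial uses later (an assumption about the original source); the last four imports are the animation-specific ones:

    import React, { Component } from 'react';
    import {
      StyleSheet,
      View,
      Text,
      ScrollView,
      Switch,
      TouchableOpacity,
      Dimensions,
      Platform,
      Animated,        // animated components and values
      Easing,          // easing curves for timing animations
      LayoutAnimation, // animates the next layout change
      UIManager,       // needed to enable LayoutAnimation on Android
    } from 'react-native';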

Here's a brief overview of each one:

  • Animated: allows us to create animated components. React Native has a clear separation between animated and static components. Specifically, you can create animated views (<Animated.View>), text (<Animated.Text>), and images (<Animated.Image>).
  • Easing: a general container of constant values for easing animations. 
  • LayoutAnimation: for executing different kinds of animations whenever the layout changes (e.g. when the state is updated).
  • UIManager: currently, LayoutAnimation is still an experimental feature on Android. Importing UIManager allows us to enable it. For iOS, LayoutAnimation works by default, so you don't need to import UIManager.

Rotate Animation

The first step in creating an animation is to define an animated value. This is commonly done inside the component constructor. In the code below, we're defining a new animated value in the App component's constructor. Note that the name of this value can be anything, as long as it describes the animation that you want to create. 

In React Native, you can create a new animated value by calling the Value() method in the Animated class. Then supply the initial animated value as the argument. 
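For example, for the rotation animation (the property name rotateValue is an assumption):

    export default class App extends Component {
      constructor(props) {
        super(props);
        // 0 is the starting point of the rotation.
        this.rotateValue = new Animated.Value(0);
      }
    }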

Next, create the function that will execute the rotate animation. 
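Here is a sketch of that function, using the 1,500-millisecond duration mentioned below:

    spin() {
      // First, reset the animated value to its starting point.
      this.rotateValue.setValue(0);

      // Animate the value from 0 to 1 over 1,500 milliseconds.
      Animated.timing(this.rotateValue, {
        toValue: 1,
        duration: 1500,
        easing: Easing.linear,
      }).start(); // start the animation
    }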

On the first line, we need to set the initial value of the animated value that we want to work with. In this case, we're setting it to 0. 

Next, we create a new timing animation by calling the Animated.timing() function. This accepts the current animated value as its first argument and an object containing the animation config as its second. The object should contain the final value for the animated value, the duration (in milliseconds), and the type of easing animation. 

Finally, call the start() method to start the animation.

The final step is to actually implement the animation. Inside your render() method, define how the rotation value will be changed. This can be done by calling the interpolate() function. It accepts an object containing an inputRange and outputRange. inputRange is an array containing the initial and final rotation value. outputRange is an array containing the actual rotation values. 
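A sketch of that interpolation:

    // Inside render(): map the 0..1 animated value onto degrees.
    const rotation = this.rotateValue.interpolate({
      inputRange: [0, 1],
      outputRange: ['0deg', '360deg'],
    });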

So initially the object to be animated will be at 0 degrees rotation, and the final value will be 360 degrees. This rotation is done over the course of 1,500 milliseconds, as defined earlier in the animation config.

When you render the component, the rotation value is added as a transform in the styles. So if you're familiar with CSS animations, this is the equivalent implementation in React Native.
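Something like this, where styles.box is an assumed style for the animated component:

    <Animated.View style={[styles.box, { transform: [{ rotate: rotation }] }]}>
      <Text>Spin</Text>
    </Animated.View>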

Now that you know the basics of creating animations, let's create a few more so you know how to implement different kinds. Inside your constructor(), create an object containing the animations that we'll implement:

Don't worry if you don't know what each one does—I'm going to walk you through them all. All you need to know for now is that this configuration states whether an animation is currently enabled or not. Once it's been initialized, add the animations array to the state:
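A sketch of that configuration; the exact names and shape are assumptions:

    // `enabled` is what the Switch components toggle.
    const animations = [
      { name: 'spin', enabled: false },
      { name: 'scale', enabled: false },
      { name: 'opacity', enabled: false },
      { name: 'color', enabled: false },
      { name: 'parallel', enabled: false },
    ];

    // Add the array to the component's state.
    this.state = { animations };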

In your render() function, add the components that we'll be animating as well as the list of animations.

The renderAnimationsList() function renders the list of animations using Switch and Text components. 

Switch allows the user to toggle animations on and off. Every time the user flips the switch, the toggleAnimation() function gets executed. All it does is find the animation in question and update the value of the enabled property to the selected value. It then updates the state with the updated values and loops through all the animations, executing only the enabled ones.
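A sketch of what toggleAnimation() might look like under those assumptions:

    toggleAnimation(name, value) {
      // Update the `enabled` flag of the switched animation.
      const animations = this.state.animations.map((anim) =>
        anim.name === name ? Object.assign({}, anim, { enabled: value }) : anim
      );

      this.setState({ animations });

      // Loop through the animations and execute the enabled ones,
      // assuming each has a matching method on the component.
      animations.forEach((anim) => {
        if (anim.enabled && typeof this[anim.name] === 'function') {
          this[anim.name]();
        }
      });
    }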

Also add the styles that will be used throughout the app.

Scale Animation

Scale animation is where you make an object bigger or smaller than its original size. Start by creating a new animated value inside the constructor:
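Assuming the name scaleValue:

    this.scaleValue = new Animated.Value(0);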

Create the function for animating the scale. This looks similar to the spin() function; the only difference is the easing function that we're using. Here we're using easeOutBack to make the scaling more fluid. This is useful especially if this animation is executed repeatedly. If you want to know what other easing functions you can use, check out easings.net. All of the easings listed there can be used in React Native.
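A sketch of the function; note that React Native spells easeOutBack as a composition of Easing.out() and Easing.back(), and the duration here is an assumption:

    scale() {
      this.scaleValue.setValue(0);

      Animated.timing(this.scaleValue, {
        toValue: 1,
        duration: 1500,
        easing: Easing.out(Easing.back(1)), // "easeOutBack"
      }).start(() => {
        // When the animation completes, repeat it for as long
        // as the scale animation is still enabled.
        const anim = this.state.animations.find((a) => a.name === 'scale');
        if (anim && anim.enabled) {
          this.scale();
        }
      });
    }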

The other thing that's new in the function above is that we're passing in a function as an argument to the start() function. This function gets executed when the animation is done. Here we're checking if the animation is enabled, and if it is, we call the same function again. This allows us to execute the animation repeatedly as long as it's enabled.

Then, in your render() function, configure the scaling interpolation. This time, we have three values for the input and output range to create a pulsing effect, like a heartbeat. This allows us to create a scale animation that doesn't abruptly make an object bigger or smaller. The highest output range is 7, so the object will be seven times bigger than its original size.

To conserve space, just add the scale transform on the same component that we used earlier:
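For example:

    // Three stops create a heartbeat-like pulse, peaking at 7x.
    const scale = this.scaleValue.interpolate({
      inputRange: [0, 0.5, 1],
      outputRange: [1, 7, 1],
    });

    <Animated.View
      style={[styles.box, { transform: [{ rotate: rotation }, { scale }] }]}
    >
      <Text>Spin and scale</Text>
    </Animated.View>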

With those two transforms added, you can now enable both the spin and scale animation to execute them at the same time.

By now you should have noticed the patterns that allow us to create animations. Lots of code is repeated when doing animations. Best practice would be to create functions that encapsulate repeated code, but to keep things simple and easy to understand, let's stick with the raw code for the rest of the animations.

Opacity Animation

Now let's try to animate the opacity of a component. By now you should be pretty familiar with where each piece of code goes, so I'm no longer going to mention where to place each one. But in case you get confused, you can simply look at the code on GitHub.

Create a function for changing the opacity. When changing the opacity, a linear easing function is the best fit since it's the most straightforward one.

Change the opacity from visible to transparent and then visible again over the course of three seconds.

Create a new component whose opacity will be controlled:
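A sketch of the whole opacity setup, with assumed names:

    // In the constructor: this.opacityValue = new Animated.Value(0);

    opacity() {
      this.opacityValue.setValue(0);
      Animated.timing(this.opacityValue, {
        toValue: 1,
        duration: 3000,        // three seconds
        easing: Easing.linear, // the most straightforward easing
      }).start();
    }

    // In render(): visible -> transparent -> visible again.
    const opacity = this.opacityValue.interpolate({
      inputRange: [0, 0.5, 1],
      outputRange: [1, 0, 1],
    });

    <Animated.View style={[styles.box, { opacity }]}>
      <Text>Fade</Text>
    </Animated.View>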

Color Value

Next, let's try to animate the background color of a component:

This time, we're animating over the course of five seconds:

We have three colors to work with. The initial color is yellow, and then after a few seconds, it will completely turn to orange, and then to red. Note that the colors won't abruptly change; all the colors between the colors that you specified will be shown as well. React Native automatically computes the color values between the ones that you specified. You can make the duration longer if you want to see how the color changes over time. 

Just like the opacity, the interpolated value is added as a style:
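A sketch of the colour animation, with assumed names and hex values for yellow, orange, and red:

    // In the constructor: this.colorValue = new Animated.Value(0);

    color() {
      this.colorValue.setValue(0);
      Animated.timing(this.colorValue, {
        toValue: 1,
        duration: 5000, // five seconds
        easing: Easing.linear,
      }).start();
    }

    // In render(): React Native computes the in-between colors.
    const backgroundColor = this.colorValue.interpolate({
      inputRange: [0, 0.5, 1],
      outputRange: ['#FFFF00', '#FFA500', '#FF0000'],
    });

    <Animated.View style={[styles.box, { backgroundColor }]} />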

Parallel Animations

You can say that we've already executed animations in parallel. But that's just a side effect of having different transforms attached to a single component. If you want to execute multiple animations all at the same time, you need to use the parallel() function from the Animated API. This accepts an array of animation functions to execute. In the example below, we have two animated values, one for each component that we want to animate.

In the animation function, we set the initial animated values as usual. But below it, we're using Animated.parallel() to group all the animations that we want to perform. In this case, we only have two timing animations, which execute for two seconds. Also notice that we're not calling the start() method on each animation. Instead, we're using it after declaring the parallel animation. This allows us to start the animations simultaneously. 

For the interpolation to make sense, first check the style that we added for the two boxes earlier:

The blue box is aligned using flex-start, which means that it's aligned to the left. The green box is flex-end, which is aligned to the right. (At least, this is how they work if the container has a flexDirection of column. Otherwise, it's a different story.) 

With this knowledge, we can now move the boxes anywhere we want. But for this tutorial, all we want to do is move the boxes to the opposite of their initial positions. So the blue box moves to the right, and the green box moves to the left. This is where the device dimension data comes in. We're using the width of the device to calculate the final interpolation value so that the box won't go out of bounds. 

In this case, we're simply subtracting 50 from the device width to make the blue box go to the right. And for the green box, we're converting the device width to its negative equivalent so it moves to the left. You might be wondering, why 50? This is because the size of each box is 50. The box will still go out of bounds if we don't subtract its own size from the device width. 

Lastly, add the components to be animated. The transform in question is translateX, which allows us to change the position of an object in the X-axis in order to move it horizontally.
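A sketch of the parallel setup, with assumed value and style names:

    // In the constructor:
    // this.blueValue = new Animated.Value(0);
    // this.greenValue = new Animated.Value(0);

    parallel() {
      this.blueValue.setValue(0);
      this.greenValue.setValue(0);

      // Group the animations and start them simultaneously.
      Animated.parallel([
        Animated.timing(this.blueValue, { toValue: 1, duration: 2000 }),
        Animated.timing(this.greenValue, { toValue: 1, duration: 2000 }),
      ]).start();
    }

    // In render(): use the device width so the boxes stay on screen.
    const { width } = Dimensions.get('window');

    const blueTranslate = this.blueValue.interpolate({
      inputRange: [0, 1],
      outputRange: [0, width - 50], // each box is 50 units wide
    });
    const greenTranslate = this.greenValue.interpolate({
      inputRange: [0, 1],
      outputRange: [0, -(width - 50)], // negative moves it left
    });

    <Animated.View style={[styles.blueBox, { transform: [{ translateX: blueTranslate }] }]} />
    <Animated.View style={[styles.greenBox, { transform: [{ translateX: greenTranslate }] }]} />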

Aside from parallel animations, there are also the sequence and stagger animations. 

The implementation of these is similar to parallel animations in the sense that they all accept an array of animations to be executed. But the defining factor for sequence animations is that the animations you've supplied in the array will be executed in sequence. You can also add optional delays to each animation if you want. 

On the other hand, a stagger animation is a combination of parallel and sequence animations. This is because it allows you to run animations both in parallel and in sequence. Here's a pen which demonstrates stagger animations.
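For reference, minimal sketches of both, reusing the animated values from above:

    // sequence: runs each animation one after the other, in array order.
    Animated.sequence([
      Animated.timing(this.blueValue, { toValue: 1, duration: 1000 }),
      Animated.timing(this.greenValue, { toValue: 1, duration: 1000 }),
    ]).start();

    // stagger: like parallel, but each start is offset (here by 150ms).
    Animated.stagger(150, [
      Animated.timing(this.blueValue, { toValue: 1, duration: 1000 }),
      Animated.timing(this.greenValue, { toValue: 1, duration: 1000 }),
    ]).start();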

Layout Animation

Another tool that React Native provides for implementing animations is LayoutAnimation. This allows you to animate views into their new positions when the next layout happens. Layout changes usually happen when you update the state. This results in having a specific UI component either be added, updated, or removed from the screen. 

When these events happen, LayoutAnimation takes care of animating the component concerned. For example, in a to-do list app, when you add a new to-do item, it will automatically add a spring animation to spring the new item into existence.

Let's add a LayoutAnimation to the kitchen sink app. As mentioned earlier, you'll need to import LayoutAnimation, Platform, and UIManager into the app. Then, in your constructor(), add the code for enabling LayoutAnimation on Android:
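A sketch of that constructor code; the squares counter in the state is an assumption used by the examples below:

    constructor(props) {
      super(props);
      this.state = { squares: 0 }; // alongside the animations array from earlier

      // LayoutAnimation is experimental on Android and must be enabled.
      if (Platform.OS === 'android') {
        UIManager.setLayoutAnimationEnabledExperimental &&
          UIManager.setLayoutAnimationEnabledExperimental(true);
      }
    }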

(On iOS, LayoutAnimation should work by default. If you're using React Native for Web, LayoutAnimation is not supported, so you'll need to have the app exported to either Android or iOS, and try it from there.)

Next, right below the ScrollView that contains the animations list, add a button for generating squares that will be shown on the screen:
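For example, with assumed button styles:

    <TouchableOpacity style={styles.button} onPress={() => this.addSquares()}>
      <Text style={styles.buttonText}>Add Squares</Text>
    </TouchableOpacity>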

Basically, what this does is generate three small squares every time the user taps the Add Squares button. 

Here's the function for adding squares:
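A sketch, using the spring preset described below:

    addSquares() {
      // Configure the animation before the state update; the layout
      // change caused by setState() will then be animated.
      LayoutAnimation.configureNext(LayoutAnimation.Presets.spring);

      this.setState({
        squares: this.state.squares + 3, // three new squares per tap
      });
    }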

The idea is to call LayoutAnimation.configureNext() before you update the state. This accepts the animation that you want to use. Out of the box, LayoutAnimation comes with three presets: linear, spring, and easeInEaseOut. These should work for most cases, but if you need to customize the animations, you can read the documentation on LayoutAnimation to learn how to create your own.

Inside the render() function, create a for loop that will render the squares. The number of squares to be generated depends on the current value of squares in the state.

The renderSquare() function is the one that's actually rendering the squares:
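Sketches of both pieces, with an assumed styles.square style:

    // Inside render():
    const squares = [];
    for (let i = 0; i < this.state.squares; i++) {
      squares.push(this.renderSquare(i));
    }

    // The method that renders a single square:
    renderSquare(key) {
      return <View key={key} style={styles.square} />;
    }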

Third-Party Libraries

React Native's Animated API is very robust and customizable, but as you have seen so far, this comes with the disadvantage of having to write a lot of code just to implement very simple animations. So in this final section, I'll introduce you to two third-party libraries that will allow you to implement common animations with less code.

Animating Numbers

If you're creating an app which needs to animate numbers (e.g. a stopwatch or counter app), you can use the built-in setInterval() function to update the state on a set interval and then implement the animations yourself. 

Or if you want, you can use the Animate Number library. This allows you to easily implement number animations, such as customizing the transition every time the number is updated. You can install it with the following command:
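Assuming the package is the one published as react-native-animate-number:

    npm install react-native-animate-number --save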

Once installed, import it into your app and use it as a component:
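A sketch of both steps; the exact props are assumptions based on the library's documented API:

    import AnimateNumber from 'react-native-animate-number';

    // Counts up from 0 to the target value.
    <AnimateNumber value={100} countBy={1} />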

What the above code does is count up to 100 starting from 0. 

General-Purpose Animations

If you want to implement general-purpose animations such as the ones offered by the animate.css library, there's an equivalent library for React Native called Animatable. You can install it with the following command:
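The package is react-native-animatable:

    npm install react-native-animatable --save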

Once installed, import it and rework the layout animation example from earlier. All you have to do is use <Animatable.View> instead of <Animated.View> and then add a ref so we can refer to this component from JavaScript code; a sketch follows the next two paragraphs.

Next, create a resetSquares() method. This will remove all the squares that are currently on the screen. Use this.refs.squares to refer to the squares container, and then call the zoomOutUp() function to animate it out of view, zooming out and upward. And don't forget to update the state after the animation has completed. This is a common pattern when implementing animations: do the animation before updating the state.

The same is true with the addSquares() method. But this time, we're animating the squares container back in. And instead of executing the animation first, we're doing it right after the state has been updated. This is because the squares container isn't really displayed unless it has a child. So here we're breaking the rule that the animation should be executed first.
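Here is a sketch of all three pieces together; the ref name, the durations, and the fadeIn animation used to bring the container back are assumptions (the text above doesn't name the incoming animation):

    import * as Animatable from 'react-native-animatable';

    // The squares container, with a ref for imperative animation calls.
    <Animatable.View ref="squares">
      {squares}
    </Animatable.View>

    resetSquares() {
      // Animate first; update the state once the animation finishes.
      this.refs.squares.zoomOutUp(500).then(() => {
        this.setState({ squares: 0 });
      });
    }

    addSquares() {
      // The container only renders once it has children, so here the
      // state is updated first and the container is animated back in.
      this.setState({ squares: this.state.squares + 3 }, () => {
        this.refs.squares.fadeIn(500);
      });
    }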

Conclusion

That's it! In this article, you've learned the basics of creating animations in React Native. Animations can be implemented using the Animated API, LayoutAnimation, and third-party libraries. 

As you have seen, creating animations can take a considerable amount of code, even for simple ones such as a scaling animation. This comes with the benefit of allowing you to customize the animations any way you want. 

However, if you don't want to deal with too much code, you can always use third-party React Native libraries specifically created for easily implementing animations. You can find the full source code used in this tutorial on GitHub.

Further Reading

  • React Native Animations Using the Animated API: a beginner-friendly guide on implementing different kinds of animations in React Native. This tutorial covers sequence and stagger animations if you want to know more about them.
  • React Native Animation Book: still a work in progress but nevertheless a valuable resource. It has almost anything you want to know about animations in React Native—for example, if you want to animate something on user scroll, or if you want to drag objects around.
  • React Native Docs - Animations: if you want to know the specific details of how to implement animations in React Native.
  • Animation in Mobile UX Design: not exactly related to React Native, but to mobile app animation in general. This is a good read for both UX designers and developers, to have a general idea on how to show meaningful animations to users.

Finally, if you want to learn more about CSS animation, check out some of our video courses.

2016-10-03 · Wernher-Bel Ancheta

Animate Your React Native App

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-27328

Animation is an important part of user experience design. It serves as feedback on user actions, informs users of system status, and guides them on how to interact with the interface. 

One of the tools that I'm using to create cross-platform mobile apps is React Native, so in this tutorial I'll walk you through how to implement animations in this platform. The final output for this tutorial will be a kitchen sink app that implements different kinds of animations. Here's how it will look:

React Native Animations Kitchen Sink App

I'll be assuming that you already know the basics of working with React Native, so I won't be delving too much into the code that doesn't have something to do with animations. For more background on React Native, check out some of my other tutorials.

We will be specifically working on the Android platform, but the code used in this tutorial should work on iOS as well. In fact, if you don't want to deal with the pain of setting up a new React Native project, I recommend that you check out React Native Web Starter. This allows you to create a new React Native project that you can preview in the browser. This comes with the benefit of not having to set up a device, and faster hot reloading so you can preview your changes faster.

Your First Animation App

If you haven't done so already, create a new React Native project:

If you're using React Native Web Starter, here's how you create a new project:

Open the index.android.js (or index.web.js) file, remove the default code, and add the following:

If you're on React Native for Web, you can skip the above step as the default code is already set up to use the App component.

Create an app/components folder and inside create an App.js file. This will be the primary file that we're going to work with. Once you've created the file, you can go ahead and import the packages that you will need for the whole project.

If you've done any sort of React Native development before, you should already be pretty familiar with the following components. If not, take a look at the React Native API docs.

These are the packages that are specifically used for implementing animations:
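In App.js, that means importing the following from react-native (Platform is included here as well, because we'll need it later to enable LayoutAnimation on Android):

    import {
      Animated,
      Easing,
      LayoutAnimation,
      Platform,
      UIManager
    } from 'react-native';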

Here's a brief overview of each one:

  • Animated: allows us to create animated components. React Native has a clear separation between animated and static components. Specifically, you can create animated views (<Animated.View>), text (<Animated.Text>), and images (<Animated.Image>).
  • Easing: a collection of easing functions and constants used to control how an animation progresses over time. 
  • LayoutAnimation: for executing different kinds of animations whenever the layout changes (e.g. when the state is updated).
  • UIManager: currently, LayoutAnimation is still an experimental feature on Android. Importing UIManager allows us to enable it. For iOS, LayoutAnimation works by default, so you don't need to import UIManager.

Rotate Animation

The first step in creating an animation is to define an animated value. This is commonly done inside the component constructor. In the code below, we're defining a new animated value in the App component's constructor. Note that the name of this value can be anything, as long as it describes the animation that you want to create. 

In React Native, you can create a new animated value by calling the Value() method in the Animated class. Then supply the initial animated value as the argument. 
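For the rotate animation, that looks like this (spinValue is an arbitrary name):

    constructor(props) {
      super(props);
      // 0 is the initial animated value
      this.spinValue = new Animated.Value(0);
    }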

Next, create the function that will execute the rotate animation. 

On the first line, we need to set the initial value of the animated value that we want to work with. In this case, we're setting it to 0. 

Next, we create a new timing animation by calling the Animated.timing() function. This accepts the current animated value as its first argument and an object containing the animation config as its second. The object should contain the final value for the animated value, the duration (in milliseconds), and the type of easing animation. 

Finally, call the start() method to start the animation.
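Putting those steps together, here's a sketch of the function; the duration and easing are the values referred to below:

    spin() {
      // step 1: reset the animated value
      this.spinValue.setValue(0);

      // step 2: configure a timing animation
      Animated.timing(this.spinValue, {
        toValue: 1,
        duration: 1500, // in milliseconds
        easing: Easing.linear
      }).start(); // step 3: start the animation
    }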

The final step is to actually implement the animation. Inside your render() method, define how the rotation value will be changed. This can be done by calling the interpolate() function. It accepts an object containing an inputRange and outputRange. inputRange is an array containing the initial and final rotation value. outputRange is an array containing the actual rotation values. 

So initially the object to be animated will be at 0 degrees rotation, and the final value will be 360 degrees. This rotation is done over the course of 1,500 milliseconds, as defined earlier in the animation config.

When you render the component, the rotation value is added as a transform in the styles. So if you're familiar with CSS animations, this is the equivalent implementation in React Native.
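In code, the interpolation and the transform look roughly like this (styles.box is an assumed style name):

    // inside render():
    const spin = this.spinValue.interpolate({
      inputRange: [0, 1],
      outputRange: ['0deg', '360deg']
    });

    return (
      <Animated.View style={[styles.box, { transform: [{ rotate: spin }] }]} />
    );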

Now that you know the basics of creating animations, let's create a few more so you know how to implement different kinds. Inside your constructor(), create an object containing the animations that we'll implement:
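A sketch of that configuration object; each key names an animation and tracks whether it's switched on:

    this.animations = {
      spin: { enabled: false },
      scale: { enabled: false },
      opacity: { enabled: false },
      color: { enabled: false },
      parallel: { enabled: false }
    };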

Don't worry if you don't know what each one does—I'm going to walk you through them all. All you need to know for now is that this configuration states whether an animation is currently enabled or not. Once it's been initialized, add the animations object to the state:
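For example:

    this.state = {
      animations: this.animations,
      squares: 0 // used by the LayoutAnimation demo later on
    };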

In your render() function, add the components that we'll be animating as well as the list of animations.

The renderAnimationsList() function renders the list of animations using Switch and Text components. 

Switch allows the user to toggle animations on and off. Every time the user flips the switch, the toggleAnimation() function gets executed. All it does is find the animation in question and update the value of the enabled property to the selected value. It then updates the state with the updated values and loops through all the animations, executing only the enabled ones.
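A sketch of that handler, assuming the state shape above and that each animation has a component method of the same name:

    toggleAnimation(name, value) {
      const animations = Object.assign({}, this.state.animations);
      animations[name].enabled = value;

      this.setState({ animations });

      // execute only the animations that are switched on
      Object.keys(animations).forEach((key) => {
        if (animations[key].enabled) {
          this[key](); // e.g. this.spin(), this.scale()
        }
      });
    }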

Also add the styles that will be used throughout the app.

Scale Animation

Scale animation is where you make an object bigger or smaller than its original size. Start by creating a new animated value inside the constructor:

Create the function for animating the scale. This looks similar to the spin() function; the only difference is the easing function that we're using. Here we're using easeOutBack to make the scaling more fluid. This is useful especially if this animation is executed repeatedly. If you want to know what other easing functions you can use, check out easings.net. All of the easings listed there can be used in React Native.

The other thing that's new in the function above is that we're passing in a function as an argument to the start() function. This function gets executed when the animation is done. Here we're checking if the animation is enabled, and if it is, we call the same function again. This allows us to execute the animation repeatedly as long as it's enabled.
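Here's what that can look like, with scaleValue being the animated value from the constructor. Easing.out(Easing.back(1)) is React Native's equivalent of easeOutBack, and the enabled check assumes the state shape sketched earlier:

    scale() {
      this.scaleValue.setValue(0);

      Animated.timing(this.scaleValue, {
        toValue: 1,
        duration: 1500,
        easing: Easing.out(Easing.back(1)) // easeOutBack
      }).start(() => {
        // repeat for as long as the animation is enabled
        if (this.state.animations.scale.enabled) {
          this.scale();
        }
      });
    }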

Then, in your render() function, configure the scaling interpolation. This time, we have three values for the input and output range to create a pulsing effect, like a heartbeat. This allows us to create a scale animation that doesn't abruptly make an object bigger or smaller. The highest output value is 7, so the object will become seven times bigger than its original size.

To conserve space, just add the scale transform on the same component that we used earlier:
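The interpolation and the combined transform can be sketched like this; the middle stops are assumptions, but the peak of 7 matches the description above:

    const scale = this.scaleValue.interpolate({
      inputRange: [0, 0.5, 1],
      outputRange: [1, 7, 1] // grow to 7x, then shrink back
    });

    return (
      <Animated.View
        style={[styles.box, { transform: [{ rotate: spin }, { scale }] }]}
      />
    );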

With those two transforms added, you can now enable both the spin and scale animation to execute them at the same time.

By now you should have noticed the patterns that allow us to create animations. Lots of code is repeated when doing animations. Best practice would be to create functions that encapsulate repeated code, but to keep things simple and easy to understand, let's stick with the raw code for the rest of the animations.

Opacity Animation

Now let's try to animate the opacity of a component. By now you should be pretty familiar with where each piece of code goes, so I'm no longer going to mention where you will place each one. But in case you get confused, you can simply look at the code on GitHub:

Create a function for changing the opacity. When changing the opacity, a linear easing function is the best fit since it's the most straightforward one.

Change the opacity from visible to transparent and then visible again over the course of three seconds.

Create a new component whose opacity will be controlled:
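Here's a sketch of the whole opacity setup, with an opacityValue created in the constructor like the others:

    opacity() {
      this.opacityValue.setValue(0);

      Animated.timing(this.opacityValue, {
        toValue: 1,
        duration: 3000, // three seconds
        easing: Easing.linear
      }).start(() => {
        if (this.state.animations.opacity.enabled) {
          this.opacity();
        }
      });
    }

    // inside render():
    const opacity = this.opacityValue.interpolate({
      inputRange: [0, 0.5, 1],
      outputRange: [1, 0, 1] // visible -> transparent -> visible
    });

    return <Animated.View style={[styles.box, { opacity }]} />;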

Color Value

Next, let's try to animate the background color of a component:

This time, we're animating over the course of five seconds:

We have three colors to work with. The initial color is yellow; after a few seconds it will turn completely to orange, and then to red. Note that the colors won't change abruptly: all the intermediate colors are shown as well, because React Native automatically computes the color values between the ones that you specified. You can make the duration longer if you want to see how the color changes over time. 

Just like the opacity, the interpolated value is added as a style:
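Sketched out with a colorValue animated value and hex equivalents of the three colors:

    color() {
      this.colorValue.setValue(0);

      Animated.timing(this.colorValue, {
        toValue: 1,
        duration: 5000 // five seconds
      }).start(() => {
        if (this.state.animations.color.enabled) {
          this.color();
        }
      });
    }

    // inside render():
    const backgroundColor = this.colorValue.interpolate({
      inputRange: [0, 0.5, 1],
      outputRange: ['#ffff00', '#ffa500', '#ff0000'] // yellow -> orange -> red
    });

    return <Animated.View style={[styles.box, { backgroundColor }]} />;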

Parallel Animations

You can say that we've already executed animations in parallel. But that's just a side effect of having different transforms attached to a single component. If you want to execute multiple animations all at the same time, you need to use the parallel() function from the Animated API. This accepts an array of animation functions to execute. In the example below, we have two animated values, one for each component that we want to animate.

In the animation function, we set the initial animated values as usual. But below it, we're using Animated.parallel() to group all the animations that we want to perform. In this case, we only have two timing animations, which execute for two seconds. Also notice that we're not calling the start() method on each animation. Instead, we're using it after declaring the parallel animation. This allows us to start the animations simultaneously. 
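A sketch of that function, with blueValue and greenValue standing in for the two animated values:

    parallel() {
      this.blueValue.setValue(0);
      this.greenValue.setValue(0);

      Animated.parallel([
        Animated.timing(this.blueValue, { toValue: 1, duration: 2000 }),
        Animated.timing(this.greenValue, { toValue: 1, duration: 2000 })
      ]).start(); // both animations start simultaneously
    }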

For the interpolation to make sense, first check the style that we added for the two boxes earlier:

The blue box is aligned using flex-start, which means that it's aligned to the left. The green box is flex-end, which is aligned to the right. (At least, this is how they work if the container has a flexDirection of column. Otherwise, it's a different story.) 

With this knowledge, we can now move the boxes anywhere we want. But for this tutorial, all we want to do is move the boxes to the opposite of their initial positions. So the blue box moves to the right, and the green box moves to the left. This is where the device dimension data comes in. We're using the width of the device to calculate the final interpolation value so that the box won't go out of bounds. 

In this case, we're simply subtracting 50 from the device width to make the blue box go to the right. And for the green box, we're converting the device width to its negative equivalent so it moves to the left. You might be wondering, why 50? This is because the size of each box is 50. The box will still go out of bounds if we don't subtract its own size from the device width. 

Lastly, add the components to be animated. The transform in question is translateX, which allows us to change the position of an object in the X-axis in order to move it horizontally.
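Putting it together; the box style names are assumptions, and 50 is the box size mentioned above:

    // add Dimensions to the react-native import at the top of the file
    import { Dimensions } from 'react-native';

    const { width } = Dimensions.get('window');

    // inside render():
    const blueTranslate = this.blueValue.interpolate({
      inputRange: [0, 1],
      outputRange: [0, width - 50] // to the right, minus the box's own size
    });

    const greenTranslate = this.greenValue.interpolate({
      inputRange: [0, 1],
      outputRange: [0, -(width - 50)] // negative: to the left
    });

    return (
      <View>
        <Animated.View
          style={[styles.blueBox, { transform: [{ translateX: blueTranslate }] }]}
        />
        <Animated.View
          style={[styles.greenBox, { transform: [{ translateX: greenTranslate }] }]}
        />
      </View>
    );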

Aside from parallel animations, there are also the sequence and stagger animations. 

The implementation of these is similar to parallel animations in the sense that they all accept an array of animations to be executed. But the defining factor for sequence animations is that the animations you've supplied in the array will be executed in sequence. You can also add optional delays to each animation if you want. 

On the other hand, a stagger animation is a combination of parallel and sequence animations. This is because it allows you to run animations both in parallel and in sequence. Here's a pen which demonstrates stagger animations.

Layout Animation

Another tool that React Native provides for implementing animations is LayoutAnimation. This allows you to animate views into their new positions when the next layout happens. Layout changes usually happen when you update the state. This results in having a specific UI component either be added, updated, or removed from the screen. 

When these events happen, LayoutAnimation takes care of animating the component concerned. For example, in a to-do list app, when you add a new to-do item, it will automatically add a spring animation to spring the new item into existence.

Let's add a LayoutAnimation into the kitchen sink app. As mentioned earlier, you'll need to import LayoutAnimation, Platform, and UIManager into the app. Then, in your constructor(), add the code for enabling LayoutAnimation on Android:
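The enabling code is a one-time call guarded by a platform check:

    if (Platform.OS === 'android') {
      UIManager.setLayoutAnimationEnabledExperimental &&
        UIManager.setLayoutAnimationEnabledExperimental(true);
    }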

(On iOS, LayoutAnimation should work by default. If you're using React Native for Web, LayoutAnimation is not supported, so you'll need to have the app exported to either Android or iOS, and try it from there.)

Next, right below the ScrollView that contains the animations list, add a button for generating squares that will be shown on the screen:

Basically, what this will do is generate three small squares every time the user taps the Add Squares button. 

Here's the function for adding squares:

The idea is to call the LayoutAnimation.configureNext() before you update the state. This accepts the animation that you want to use. Out of the box, LayoutAnimation comes with three presets: linear, spring, and easeInEaseOut. These should work for most cases, but if you need to customize the animations, you can read the documentation on LayoutAnimation to learn how to create your own.
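With the spring preset, the function can be as small as this (the squares counter follows the state sketched earlier):

    addSquares() {
      // configure the animation before updating the state
      LayoutAnimation.configureNext(LayoutAnimation.Presets.spring);

      this.setState({
        squares: this.state.squares + 3
      });
    }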

Inside the render() function, create a for loop that will render the squares. The number of squares to be generated depends on the current value of squares in the state.

The renderSquare() function is the one that's actually rendering the squares:
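A sketch of those two pieces; styles.square is an assumed style:

    renderSquares() {
      const squares = [];
      for (let i = 0; i < this.state.squares; i++) {
        squares.push(this.renderSquare(i));
      }
      return squares;
    }

    renderSquare(key) {
      return <View key={key} style={styles.square} />;
    }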

Third-Party Libraries

React Native's Animated API is very robust and customizable, but as you have seen so far, this comes with the disadvantage of having to write a lot of code just to implement very simple animations. So in this final section, I'll introduce you to two third-party libraries that will allow you to implement common animations with less code.

Animating Numbers

If you're creating an app which needs to animate numbers (e.g. a stopwatch or counter app), you can use the built-in setInterval() function to update the state on a set interval and then implement the animations yourself. 

Or if you want, you can use the Animate Number library. This allows you to easily implement number animations, such as customizing the transition every time the number is updated. You can install it with the following command:
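Assuming the package name from the library's repository:

    npm install react-native-animate-number --save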

Once installed, import it into your app:
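For example:

    import AnimateNumber from 'react-native-animate-number';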

Then use it as a component:
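A minimal usage sketch (treat the props as assumptions and check the library's README for the full set):

    <AnimateNumber value={100} countBy={1} />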

What the above code does is count up to 100 starting from 0. 

General-Purpose Animations

If you want to implement general-purpose animations such as the ones offered by the animate.css library, there's an equivalent library for React Native called Animatable. You can install it with the following command:
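The package lives on npm:

    npm install react-native-animatable --save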

Once installed, import it with the following code:
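For example:

    import * as Animatable from 'react-native-animatable';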

Here's an example using the code that we added earlier for our layout animation. All you have to do is use <Animatable.View> instead of <Animated.View> and then add a ref so we can refer to this component using JavaScript code.

Next, create a resetSquares() method. This will remove all the squares that are currently on the screen. Use this.refs.squares to refer to the squares container, and then call its zoomOutUp() function to animate it out of view with an upward zoom-out. And don't forget to update the state after the animation has completed. This is a common pattern when implementing animations: do the animation before updating the state.
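A sketch of both sides of that, using a string ref named squares (the duration is arbitrary):

    // in render(), the squares container:
    <Animatable.View ref="squares">
      { this.renderSquares() }
    </Animatable.View>

    // the reset handler:
    resetSquares() {
      // animate out first, then update the state
      this.refs.squares.zoomOutUp(800).then(() => {
        this.setState({ squares: 0 });
      });
    }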

The same is true with the addSquares() method. But this time, we're animating the squares container back in. And instead of executing the animation first, we're doing it right after the state has been updated. This is because the squares container isn't really displayed unless it has a child. So here we're breaking the rule that the animation should be executed first.

Conclusion

That's it! In this article, you've learned the basics of creating animations in React Native. Animations can be implemented using the Animated API, LayoutAnimations, and third-party libraries. 

As you have seen, creating animations can take a considerable amount of code, even for simple ones such as a scaling animation. This comes with the benefit of allowing you to customize the animations any way you want. 

However, if you don't want to deal with too much code, you can always use third-party React Native libraries specifically created for easily implementing animations. You can find the full source code used in this tutorial on GitHub.

Further Reading

  • React Native Animations Using the Animated API: a beginner-friendly guide on implementing different kinds of animations in React Native. This tutorial covers sequence and stagger animations if you want to know more about them.
  • React Native Animation Book: still a work in progress but nevertheless a valuable resource. It has almost anything you want to know about animations in React Native—for example, if you want to animate something on user scroll, or if you want to drag objects around.
  • React Native Docs - Animations: if you want to know the specific details of how to implement animations in React Native.
  • Animation in Mobile UX Design: not exactly related to React Native, but to mobile app animation in general. This is a good read for both UX designers and developers, to have a general idea on how to show meaningful animations to users.

Finally, if you want to learn more about CSS animation, check out some of our video courses.

2016-10-03T14:18:33.000Z2016-10-03T14:18:33.000ZWernher-Bel Ancheta


Create a Pokémon GO Style Augmented Reality Game With Vuforia: Part 2

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-27232
What You'll Be Creating

In the last post of this series, we learned how to set up Vuforia and start developing an AR game from scratch, adopting logic similar to that used in Pokémon GO!

We've started development of an Augmented Reality game called Shoot the Cubes. Now it's time to improve the game by adding interaction and making the experience more engaging. We'll concentrate mostly on the possibilities Unity gives us, leaving aside Vuforia's specifics. Experience with the Unity engine is not mandatory.

1. Making the Cubes Look Alive

Let's start working on our game. So far we've managed to create an Augmented Reality scene that moves with the user's device. We'll improve this app by making the cubes spawn and fly around, and by letting the player search and destroy them with a laser shot.

1.1. Spawning the Elements

We've already set an initial position for the _SpawnController according to the device camera rotation. Now we'll establish an area around this point where our cubes will spawn, and update the SpawnScript to make the _SpawnController instantiate cube elements with different sizes and random positions relative to itself.

Let's edit the SpawnScript class, adding some variables to control the spawning process.

We'll create a coroutine called SpawnLoop to manage the spawn process. It will also be responsible for setting the initial position of the _SpawnController when the game begins. Notice that the Random.insideUnitSphere method causes the cubes to be instantiated at random locations within a spherical area around the _SpawnManager.

Finally, edit the Start() function. Make sure to remove the StartCoroutine( ChangePosition() ); line and replace it with a call to start the SpawnLoop coroutine.

Now back in Unity you'll have to create a cube prefab to be instantiated by the script. 

  • Create a folder called Prefabs in the Project window.
  • Change the Cube element scale on all axes to 1 and change the rotation to 0 on all axes.
  • Drag the Cube to the Prefabs folder.
  • Delete the Cube from the Hierarchy window.
  • Select the _SpawnController and drag the Cube prefab from the Prefabs folder to the M Cube Obj field, located in the SpawnScript area of the inspector.


Finally, delete the Sphere located inside the _SpawnManager.

Now, if you press play in Unity and run the project on the device, you should see the cubes spawning.

1.2. Rotating Cubes

We need to add movement to those cubes to make things more interesting. Let's rotate the cubes around their own axes and around the ARCamera. It would also be cool to add some randomness to the cube movement to create a more organic feel.

Drag the Cube Prefab from the Prefabs folder to the hierarchy.

  • On the Scripts folder create a new C# Script called CubeBehaviorScript.
  • Drag the Script to the Cube prefab and open it to edit.

Each cube will have random characteristics. The size, rotation speed and rotation direction will be defined randomly, using some references previously defined. Let's create some controller variables and initialize the state of the cube.

Now it's time to add some movement to our cube. Let's make it rotate around itself and around the ARCamera, using the random speed and direction defined earlier.

To make it more organic, the cube will scale up from size zero after it is spawned.

2. Searching and Destroying

Those cubes are too happy flying around like that. We must destroy them with lasers! Let's create a gun in our game and start shooting them.

2.1. Shooting a Laser

The laser shot must be connected to the ARCamera and its rotation. Every time the player taps the device's screen, a laser will be shot. We'll use the Physics.Raycast class to check whether the laser hit the target and, if so, remove some health from it.

  • Create a new empty object named _PlayerController and another empty object called _LaserController inside of it. 
  • Add a C# script called LaserScript inside the Scripts folder and drag it to _LaserController.

Inside LaserScript, we'll use a LineRenderer to show the laser ray, using an origin point connected to the bottom of the ARCamera. To get the origin point of the laser ray—the barrel of the virtual gun—we'll get the camera Transform at the moment when a laser is shot and move it 10 units down. 

First, let's create some variables to control the laser settings and get mLaserLine.

The function responsible for shooting is Fire(). That will be called every time the player presses the fire button. Notice that Camera.main.transform is being used to get the ARCamera position and rotation and that the laser is positioned 10 units below that. This positions the laser at the bottom of the camera.

To check if the target was hit, we'll use a Raycast.

At last, it's time to check if the fire button was pressed and call the laser effects when the shot is fired.

Back in Unity we'll need to add a LineRenderer component to the _LaserController object.

  • Select _LaserController and in the Inspector window, click on Add Component. Name it "Line Renderer" and select the new component.
  • Create a new folder called Materials, and create a new material inside of it called Laser.
  • Select the Laser material and change it to any color that you like.
  • Select the _LaserController and drag the Laser material to the Materials field of the LineRenderer component.
  • Still in LineRenderer, under Parameters, set Start Width to 1 and End Width to 0.

If you test the game now you should see a laser being shot from the bottom of the screen. Feel free to add an AudioSource component with a laser sound effect to _LaserController to spice it up.

2.2. Hitting the Cubes

Our lasers need to hit their targets, apply damage and eventually destroy the cubes. We'll need to add a RigidBody to the cubes, applying force and damage to them.

  • Drag the Cube prefab from the prefabs folder to any place on the Stage.
  • Select the Cube and select Add Component in the Inspector window. Name the new component "RigidBody" and select it.
  • On the RigidBody component, set Use Gravity and Is Kinematic to Off.
  • Apply those changes to the prefab.

Now let's edit the CubeBehavior script to create a function responsible for applying damage to the cube and another one for destroying it when the health falls below 0.

Okay, the cube can now take damage and be destroyed. Let's edit the LaserScript to apply damage to the cube. All we have to do is change the Fire() function to call the Hit method of the CubeBehavior script.

3. Conclusion

Congratulations! Our Augmented Reality game is finished! Yes, the game could be more polished, but the basics are there and the overall experience is pretty engaging. To make it more interesting you could add a particle explosion, like I did in the video, and on top of that, you could add a score system or even a wave system with a timer to make it more of a challenge. The next steps are up to you!

3.1. What's next?

We created an interesting AR experiment using Vuforia in Unity; however, we still have a lot of cool features to cover. We didn't touch any of the more sophisticated Vuforia resources: Targets, Smart Terrain, Cloud, and so on. Stay tuned for the next tutorials, where we'll cover more of those features, always using the same step-by-step approach.

See you soon!

2016-10-05T23:47:23.000Z2016-10-05T23:47:23.000ZTin Megali

Getting Started With Ionic: Cordova

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-27023
Final product image
What You'll Be Creating

In this final installment in the Getting Started with Ionic series, you'll learn how to leverage Cordova to add device hardware capabilities to your app. We'll look at how to use features like geolocation, and I'll show you how to integrate the ngCordova library to finish up our app. You'll want to be sure you've set up your machine for Ionic before we begin, and make sure you have your platform-specific tooling set up as well. 

Project Files

The tutorial project files are available on GitHub. The general premise of the app is that it shows some information about facilities near you. In this tutorial, we add the ability to look up the current location and show results near you. You can see the working example here.

If you clone the project, you can also code along by using Git and running git checkout -b start. The coding for this lesson starts where the last post, Getting Started With Ionic: Navigation, left off. You can also preview the starting point live.

Setting up Cordova

Before we write any code, we need to set up our project. Ionic will already set up a basic Cordova project, but we need to initialize a few additional things ourselves. 

First, we need to install the ngCordova library. This is an Angular module that makes using some of the most popular Cordova plugins much easier. (Though not all plugins are supported by ngCordova.) Bower is used to install this dependency. 

Second, we need to add a platform to our project. This will be ios or android (or both!), depending on the platform you choose to support. I've used ios in my examples, but you can replace that with android to achieve the same for that platform.

Third, we will install the geolocation plugin for Cordova. This enhances your app with the ability to get the user's current location (which requires permission). You can see a list of all plugins at https://cordova.apache.org/plugins/ with details on how to set up each one.

The following commands should be run from the root of the project to do these three setup steps.
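Something like the following, run from the project root (swap ios for android if you're targeting Android):

    bower install ngCordova --save
    ionic platform add ios
    cordova plugin add cordova-plugin-geolocation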

Once that has completed, we need to add ngCordova to the application so it is recognized by Ionic. Open up the www/index.html file and add the following line after the ionic.bundle.js file. 

Then we need to update the www/js/app.js file to include the ngCordova module. The first line should be updated to appear as follows.
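Assuming the module name used by the starter app, the dependency list just gains ngCordova:

    angular.module('App', ['ionic', 'ngCordova'])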

Now that we've gotten our dependencies updated, our project is ready to use the geolocation plugin. The next step is to set up a new view to start using it!

Adding the Location View

We'll create a new landing screen for the app. This screen shows a message about using the user's current location to find nearby places. They will tap a button to confirm they wish the app to access their location, and then see the list of places that reflects their current location.

User location data is sensitive, and apps should be careful when collecting it, so mobile devices don't allow apps to automatically have access to geolocation data. Apps have to request permission (which can be declined or revoked at any time), so you need to keep that in mind when using certain plugins that require permissions. (Use that data carefully and avoid violating the privacy of your users!)

First, we will create the template for our location view. Create a new file at www/views/location/location.html and include the following. (This should all be familiar from the previous tutorials, but it essentially creates a new view with a button that will call a method in our controller, which we'll define next.)

We'll now create the controller shell, and then add a method to handle getting a user's location. Create another new file at www/views/location/location.js and include the following code. Make sure to also link to this file in the www/index.html file.
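Here's a sketch of that shell; the controller and service names are assumptions, but the injected dependencies are the ones discussed below:

    angular.module('App')
      .controller('LocationController', function(
        $scope, $http, $state, $ionicLoading, $ionicPopup,
        $cordovaGeolocation, Geolocation
      ) {
        // useLocation() will be added here
      });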

At this point we have a new location view, but the button won't work just yet. You can preview the app in your browser with ionic serve. You should be able to see the view if you go to http://localhost:8100/#/location.

You will notice a service called $cordovaGeolocation in the controller constructor, which is going to provide us access to the user's location data. The rest of the injected services are needed just to handle the business logic of what to do with the location data.

There are actually two steps involved in getting the user's location in this app: first, getting the geolocation data from the Cordova plugin (which just returns a latitude and longitude value), and then doing a reverse lookup against the Google Geocode API to find the closest place (which works well with our other API calls).

Add the following method to the controller, and we'll go through it in detail below.
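Here's roughly how that method can be put together. The $cordovaGeolocation call follows ngCordova's API; the Geolocation value service, state names, and messages are this app's own, so treat them as assumptions:

    $scope.useLocation = function() {
      $ionicLoading.show({ template: 'Finding your location...' });

      var options = { timeout: 10000, enableHighAccuracy: false };

      $cordovaGeolocation.getCurrentPosition(options)
        .then(function(position) {
          var url = 'https://maps.googleapis.com/maps/api/geocode/json?latlng=' +
            position.coords.latitude + ',' + position.coords.longitude;

          $http.get(url).then(function(response) {
            $ionicLoading.hide();
            if (response.data.results.length) {
              Geolocation.data = response.data.results[0];
              $state.go('places');
            } else {
              $ionicPopup.alert({ title: 'No places found near you.' });
            }
          });
        })
        .catch(function() {
          $ionicLoading.hide();
          $ionicPopup.alert({ title: 'Unable to determine your location.' });
        });
    };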

First thing is to use the $ionicLoading service to display a loader while the current location is detected. 

Then we use the $cordovaGeolocation service, which has a method called getCurrentPosition to get the current position. We specify the options, which are reasonable default values. You can read about the other getCurrentPosition options in the documentation.

The getCurrentPosition method returns a promise, so we use then to handle the response. Assuming the user agrees to share their location, it gives us an object that contains the coordinates for latitude and longitude. We then use those values to call the API to do a reverse geocode and look up the closest place. If it fails, the catch handler will use $ionicPopup to show an alert that it failed.

The $http service is used to look up the reverse geocode, and if it is successful we need to check if any results were returned. If one was found, the Geolocation service value is updated and the user is redirected to the places list using $state.go. Otherwise, it uses $ionicPopup to show an alert saying that no places were found.

That's all we had to do to enable the geolocation feature of the device in our app. However, we still have to make a couple minor tweaks to the places view and to the app.js file.

First open up the www/js/app.js file and update the Geolocation service to the following. This is necessary to clear the default value of the geolocation service which we had previously hard coded to Chicago, and enable the digest cycle to properly sync changes.

Then modify the default view for the app by updating the last config line to the following. This will make the app start on the location view instead.
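Assuming the starter's ui-router setup, that last line becomes:

    $urlRouterProvider.otherwise('/location');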

Lastly, we want to tweak the places view to no longer cache (so the updated location is always reflected), and to redirect to the location view if no location is found. Open up the www/views/places/places.html and update the first line as follows.

Then open the www/views/places/places.js and update the start of the controller to match the following.

We do a quick check before the controller fires to detect whether the geolocation has been found; if not, we redirect to the location view to reset it.
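That check is just a couple of lines at the top of the controller (again assuming the Geolocation service sketched earlier):

    if (!Geolocation.data) {
      $state.go('location');
      return;
    }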

This will actually also work in your browser, as the browser has support for geolocation features. In the next section we'll briefly talk about how to preview the app in an emulator.

Previewing in an Emulator

The Ionic CLI has a command that lets you easily emulate the app in a software version of a device. Not all hardware features are available, but many are emulated, including geolocation.

Using ionic emulate ios, Ionic will start the emulator and load the app (similarly for Android). If you have errors, it is likely that your machine has not been fully set up for the platform you are trying to emulate.

This command will boot a real version of the platform OS, and simulate a very realistic device. For someone who doesn't have an actual device to test with, this is a great way to quickly verify different devices work with your app.

You may need to reset the emulator sometimes to ensure that changes you make don't persist. For example, if you deny the app permission for geolocation, you will likely have to find the settings to allow it later, or you can reset the emulator to remove any settings.

Conclusion

In this tutorial we looked at how to use Cordova to create an app that leverages the device's hardware capabilities and sensors. Using ngCordova, it is very easy to access this information in your Ionic apps. You now have access to device features like the camera, geolocation, fingerprint readers, and the calendar. When possible, you should leverage the ngCordova library to make integration easier, but there are other Cordova plugins not exposed by ngCordova.

This also concludes the Getting Started with Ionic series. You should now have a grasp of the basics and be able to move ahead with Ionic. If you're interested in more, check out some of our other courses and tutorials on the Ionic framework.


2016-10-10T12:20:50.000Z2016-10-10T12:20:50.000ZJeremy Wilken


Host a Parse SDK Backend for Your iOS App on back{4}app

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-27206
Final product image
What You'll Be Creating

About Parse SDK and back{4}app

You may have read that Facebook is shutting down Parse, but don't worry that the Parse SDK will die. Facebook will retire the parse.com hosting service in January 2017, but the Parse SDK has been made open source. This means that the Parse SDK is here to stay! It has lots of great developers working on it, and there are a number of brand-new websites that offer backend hosting as a service.

One of the best is back{4}app. It offers the following features with a free account:

  • 10 requests / second
  • 50 K requests / month
  • 5 GB file storage
  • 1 GB database storage
  • 1 cloud code job

Pretty nice, right? Check out their pricing table for more options.

Create a Free Account on back{4}app

Let's start by simply creating a free account on back{4}app. First, go to their website and create a new account. After you've successfully signed up, you'll be redirected to the dashboard. Click the green Build new Parse app button and you'll be redirected to the screen where you can type a name for your app:

Create a new Parse App

The last option is about making your app's API public so any other developer can access it if you share your App ID with them. This may be useful if you hire somebody to work on your Parse Dashboard without giving them the login credentials of your back{4}app account. I usually leave it unchecked.

Then press the blue NEXT button to access your app's keys.

Parse App info screen


In the window shown above you can find all the keys you need to set up your own project, whether it's an iOS or Android app, a JavaScript project, or something else. If you're an iOS developer, all you need to do is copy the App ID and Client ID strings and paste them into your code, in the Parse init method in AppDelegate.swift.

From the info screen, you may also delete your app, or go back to the main page for your app, where you can access the Parse Dashboard. Click on the Parse Dashboard button to enter your dashboard, where you can add classes and rows as easily as if you were working with an Excel file.

Parse App main page on back4app


The Parse Dashboard

Now that your app is set up on back{4}app, you can start testing with no worries about incurring fees, thanks to the free account tier.

Parse Dashboard window

The Parse Dashboard of a brand new app is empty and it shows only the pre-made User class with its primary columns: objectId, createdAt, updatedAt, ACL, username, password, email, and emailVerified.

If you want to add a custom column to this user class, just click on the dark Add a new column button in the top right of the window. If you want to add a row instead, you can either click on the blue Add a row button or use Edit -> Add a row. Try adding a new row and enter something in the username, password, and email fields.

Inserting data in the Parse Dashboard

You've just created a new user with the User class. You'll see that the objectId, createdAt, updatedAt, and ACL fields got filled in automatically. Please note that ACL stands for Access Control List; it's where you can set the Read and Write properties for the public and for the selected user.

Access Control List


If you want to create a new class, just click on the Create class button in the sidebar on the left. Let's try to create a class called Products, of type Custom.

Create a new Custom class

This time you'll get a screen with only objectId, createdAt, updatedAt, and ACL columns—these are the basic fields for every class. You can add your own columns to define your custom class datatype.

Conclusion

If you're writing an app that works with Parse SDK, you can also have it create the necessary classes, columns and rows in code. For example, my AskIt app template on CodeCanyon makes it easy to get set up with a Parse backend for your next iOS app. All you need to do is configure it with your back{4}app credentials, and the template will do the rest.

If you'd like to learn more about Parse SDK, check out some of our other courses and tutorials.


2016-10-11T23:27:24.000Z2016-10-11T23:27:24.000ZFrancesco Franchini


Concurrency on Android with Service

tag:code.tutsplus.com,2005:PostPresenter/cms-27277

In this tutorial we'll explore the Service component and its superclass, the IntentService. You'll learn when and how to use this component to create great concurrency solutions for long-running background operations. We'll also take a quick look at IPC (Inter Process Communication), to learn how to communicate with services running on different processes.

To follow this tutorial you'll need some understanding of concurrency on Android. If you don’t know much about it, you might want to read some of our other articles about the topic first.

1. The Service Component

The Service component is a very important part of Android's concurrency framework. It fulfills the need to perform a long-running operation within an application, or it supplies some functionality for other applications. In this tutorial we’ll concentrate exclusively on Service’s long-running task capability, and how to use this power to improve concurrency.

What is a Service?

Service is a simple component that's instantiated by the system to do some long-running work that doesn't necessarily depend on user interaction. It can be independent of the activity life cycle and can also run on a completely different process.

Before diving into a discussion of what a Service represents, it's important to stress that even though services are commonly used for long-running background operations and to execute tasks on different processes, a Service doesn't represent a Thread or a process. It will only run in a background thread or on a different process if it's explicitly asked to do so.

A Service has two main features:

  • A facility for the application to tell the system about something it wants to be doing in the background.
  • A facility for an application to expose some of its functionality to other applications.

Services and Threads

There is a lot of confusion about services and threads. When a Service is declared, it doesn't contain a Thread. As a matter of fact, by default it runs directly on the main thread, and any work done on it may potentially freeze an application. (Unless it's an IntentService, a Service subclass that already comes with a worker thread configured.)

So, how do services offer a concurrency solution? Well, a Service doesn't contain a thread by default, but it can be easily configured to work with its own thread or with a pool of threads. We'll see more about that below.

Despite the lack of a built-in thread, a Service is an excellent solution for concurrency problems in certain situations. The main reasons to choose a Service over other concurrency solutions like AsyncTask or the HaMeR framework are:

  • A Service can be independent of activity life cycles.
  • A Service is appropriate for running long operations.
  • Services don't depend on user interaction.
  • When running on different processes, Android can try to keep services alive even when the system is short on resources.
  • A Service can be restarted to resume its work.

Service Types

There are two types of Service, started and bound.

A started service is launched via Context.startService(). Generally it performs a single operation and runs until that operation ends, then it shuts itself down. Typically, it doesn't return any result to the user interface.

The bound service is launched via Context.bindService(), and it allows a two-way communication between client and Service. It can also connect with multiple clients. It destroys itself when there isn't any client connected to it.

To choose between those two types, the  Service must implement some callbacks: onStartCommand() to run as a started service, and onBind() to run as a bound service. A Service may choose to implement only one of those types, but it can also adopt both at the same time without any problems. 

2. Service Implementation

To use a service, extend the Service class and override its callback methods, according to the type of Service. As mentioned before, for started services the onStartCommand() method must be implemented, and for bound services, the onBind() method. Actually, the onBind() method must be declared for either service type, but it can return null for started services.

  • onStartCommand(): launched by Context.startService(). This is usually called from an activity. Once called, the service may run indefinitely, and it's up to you to stop it, either by calling stopSelf() or stopService().
  • onBind(): called when a component wants to connect to the service. Called by the system when a client invokes Context.bindService(). It returns an IBinder that provides an interface for communicating with the client.

The service's life cycle is also important to take into consideration. The onCreate() and onDestroy() methods should be implemented to initialize and shut down any resources or operations of the service.

2.1. Declaring a Service in the Manifest

The Service component must be declared in the manifest with the <service> element. In this declaration it's also possible, but not obligatory, to set a different process for the Service to run in.
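A minimal declaration might look like this (the class name and process value are illustrative):

```xml
<!-- A sketch of the <service> declaration; names are illustrative. -->
<service
    android:name=".MyService"
    android:process=":background" />
```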

2.2. Working with Started Services

To initiate a started service, you must call the Context.startService() method. The Intent must be created with the Context and the Service class, and any relevant information or data should also be passed in this Intent.
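A minimal sketch, assuming a hypothetical MyService and a "DATA" extra:

```java
// A sketch; MyService and the "DATA" extra are hypothetical.
Intent intent = new Intent(this, MyService.class);
intent.putExtra("DATA", "some payload");
startService(intent);
```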

In your Service class, the method that you should be concerned with is onStartCommand(). It's in this method that you should trigger any operation that you want the started service to execute. You'll process the Intent to capture the information sent by the client. The startId represents a unique ID, automatically created for this specific request, and the flags can contain extra information about it.

The onStartCommand() method returns a constant int that controls how the system handles the service if its process gets killed (see the sketch after the list):

  • Service.START_STICKY: Service is restarted if it gets terminated.
  • Service.START_NOT_STICKY: Service is not restarted.
  • Service.START_REDELIVER_INTENT: The service is restarted after a crash, and the Intents it was processing are redelivered.
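Putting this together, a minimal onStartCommand() might look like the following sketch, where the "DATA" extra and doWork() are placeholders:

```java
@Override
public int onStartCommand(Intent intent, int flags, int startId) {
    String data = intent.getStringExtra("DATA"); // read what the client sent
    doWork(data); // careful: by default this runs on the main thread
    return START_NOT_STICKY; // don't recreate the service if it gets killed
}
```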

As mentioned before, a started service needs to be stopped, otherwise it will run indefinitely. This can be done either by the Service calling stopSelf() on itself or by a client calling stopService() on it.

Binding to Services

Components can create connections with services, establishing a two-way communication with them. The client must call Context.bindService(), passing an Intent, a ServiceConnection interface and a flag as parameters. A Service can be bound to multiple clients and it will be destroyed once it has no clients connected to it.

It's possible to send Message objects to services. To do it you'll need to create a Messenger on the client side in a ServiceConnection.onServiceConnected interface implementation and use it to send Message objects to the Service.
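A client-side sketch; MyService and the message code are hypothetical:

```java
private Messenger serviceMessenger;

private final ServiceConnection connection = new ServiceConnection() {
    @Override
    public void onServiceConnected(ComponentName name, IBinder binder) {
        serviceMessenger = new Messenger(binder); // used to send Message objects
    }

    @Override
    public void onServiceDisconnected(ComponentName name) {
        serviceMessenger = null;
    }
};

// In the client, e.g. in onStart():
bindService(new Intent(this, MyService.class), connection, Context.BIND_AUTO_CREATE);

// Later, once connected:
try {
    serviceMessenger.send(Message.obtain(null, 1 /* arbitrary "what" code */));
} catch (RemoteException e) {
    e.printStackTrace();
}
```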

It's also possible to pass a response Messenger to the Service so that the client can receive messages. Watch out though, because the client may no longer be around to receive the service's message. You could also use a BroadcastReceiver or any other broadcast solution.

It's important to unbind from the Service when the client is being destroyed.

On the Service side, you must implement the Service.onBind() method, returning an IBinder obtained from a Messenger. The Messenger wraps a response Handler that handles the Message objects received from clients.
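A sketch of the Service side:

```java
// A Handler processes the Message objects sent by clients.
private final Messenger messenger = new Messenger(new Handler(Looper.getMainLooper()) {
    @Override
    public void handleMessage(Message msg) {
        // inspect msg.what / msg.getData() and act on it;
        // msg.replyTo can hold a Messenger for sending a response back
    }
});

@Override
public IBinder onBind(Intent intent) {
    return messenger.getBinder();
}
```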

3. Concurrency Using Services

Finally, it's time to talk about how to solve concurrency problems using services. As mentioned before, a standard Service doesn't contain any extra threads, and it will run on the main Thread by default. To overcome this problem, you must add a worker Thread, use a pool of threads, or execute the Service on a different process. You could also use a subclass of Service called IntentService that already contains a worker Thread.

Making a Service Run on a Worker Thread

To make the Service execute on a background Thread, you could just create an extra Thread and run the job there. However, Android offers us a better solution. One way to take best advantage of the system is to implement the HaMeR framework inside the Service, for example by looping a Thread with a message queue that can process messages indefinitely.

It's important to understand that this implementation will process tasks sequentially. If you need to receive and process multiple tasks at the same time, you should use a pool of threads, but thread pools are outside the scope of this tutorial.

To use HaMeR you must provide the Service with a Looper, a Handler and a HandlerThread.
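Here is a sketch of what that can look like; WorkerService and doWork() are illustrative:

```java
import android.app.Service;
import android.content.Intent;
import android.os.Handler;
import android.os.HandlerThread;
import android.os.IBinder;

// A started Service backed by a HandlerThread (the HaMeR way), a sketch.
public class WorkerService extends Service {

    private HandlerThread workerThread;
    private Handler workerHandler;

    @Override
    public void onCreate() {
        workerThread = new HandlerThread("WorkerService");
        workerThread.start(); // starts the Looper on a background thread
        workerHandler = new Handler(workerThread.getLooper());
    }

    @Override
    public int onStartCommand(final Intent intent, int flags, final int startId) {
        workerHandler.post(new Runnable() {
            @Override
            public void run() {
                doWork(intent);    // runs off the main thread
                stopSelf(startId); // stop once the last request is done
            }
        });
        return START_NOT_STICKY;
    }

    @Override
    public void onDestroy() {
        workerThread.quit();
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null; // purely a started service
    }

    private void doWork(Intent intent) {
        // long-running job goes here
    }
}
```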

If the HaMeR framework is unfamiliar to you, read our tutorials on HaMeR for Android concurrency.

The IntentService

If there is no need for the Service to be kept alive for a long time, you could use IntentService, a Service subclass that's ready to run tasks on background threads. Internally, IntentService is a Service with a very similar implementation to the one proposed above. 

To use this class, all you have to do is extend it and implement onHandleIntent(), a hook method that will be called every time a client calls startService() on this Service. It's important to keep in mind that the IntentService stops itself as soon as its job is completed.
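A minimal sketch, with an illustrative class name:

```java
import android.app.IntentService;
import android.content.Intent;

public class DownloadService extends IntentService {

    public DownloadService() {
        super("DownloadService"); // names the worker thread
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        // Runs on a worker thread; the service stops itself once all
        // queued intents have been handled.
    }
}
```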

IPC (Inter Process Communication)

A Service can run on a completely different Process, independently from all tasks that are happening on the main process. A process has its own memory allocation, thread group, and processing priorities. This approach can be really useful when you need to work independently from the main process.

Communication between different processes is called IPC (Inter Process Communication). In a Service there are two main ways to do IPC: using a Messenger or implementing an AIDL interface. 

We've already learned how to send and receive messages between clients and services. All you have to do is create a Messenger using the IBinder instance received during the connection process and use it to send a reply Messenger back to the Service.

The AIDL interface is a very powerful solution that allows direct calls to the methods of a Service running in a different process, and it's appropriate when your Service is really complex. However, AIDL is complicated to implement and rarely used, so it won't be discussed in this tutorial.

4. Conclusion

Services can be simple or complex; it depends on the needs of your application. I tried to cover as much ground as possible in this tutorial; however, I've focused just on using services for concurrency purposes, and there is more to this component. If you want to study further, take a look at the documentation and Android guides.

See you soon!

2016-10-14T23:30:57.000Z · Tin Megali

Get Started With React Native Layouts

tag:code.tutsplus.com,2005:PostPresenter/cms-27418

In this tutorial, you'll learn how to lay out React Native apps and how to implement layouts commonly used in apps. This includes the Stack Layout, Grid Layout, and Absolute Layout. I'll be assuming that you already know the basics of styling a React Native app and how to use CSS in general, so I won't dwell too much on StyleSheet.create and how to add styling to different elements.

You can find the full source code for this tutorial on GitHub.

Project Setup

To make things easy, we'll use React Native for Web. With the React Native for Web Starter, we can easily spin up a new React Native project that can run in the browser. The code is 100% compatible with a standard React Native project. We'll create a separate component for each layout that we'll implement, so you can easily import them into a normal React Native project if you want. We're just using React Native for Web because it's easier to get up and running.

You can execute the following commands to set up the project:
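As a sketch, with the repository URL left as a placeholder since it depends on the starter you use:

```bash
# Illustrative setup; substitute the starter's actual repository URL.
git clone <react-native-web-starter-repo-url> react-native-layouts
cd react-native-layouts
npm install
```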

Once it's done installing, navigate to the app/components directory. This is where the files we'll primarily be working on are located.

Open the App.js file and replace the default code with the following:
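A placeholder sketch (the starter's original contents may differ):

```jsx
// app/components/App.js — a minimal placeholder.
import React, { Component } from 'react';
import { View } from 'react-native';

export default class App extends Component {
  render() {
    return <View />;
  }
}
```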

Later on, you can import the components that we'll be creating and then render them from this file. Just remember that any component that we save inside the layouts directory shouldn't be rendered with anything else. For example, if we have layouts/StackLayout.js, do the following in App.js:
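A sketch, assuming the StackLayout component we'll create shortly:

```jsx
import React, { Component } from 'react';
import StackLayout from './layouts/StackLayout';

export default class App extends Component {
  render() {
    return <StackLayout />; // render one layout component at a time
  }
}
```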

You can serve the project by executing the following command:
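Assuming the starter exposes a dev script in its package.json:

```bash
npm run dev
```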

This allows you to access it in the browser by visiting http://localhost:3000. A full page reload will be triggered if you make a change to any of the files that are currently imported from the App.js file.

How to Create Different Layouts

Layouts in React Native use a subset of Flexbox. (I say "subset" because not all features that are in the Flexbox specification are included.) So if you already know Flexbox, then you can readily apply those skills in React Native. It's also worth noting that there are no floats or percentage-based units in React Native. This means that we can only do layouts using Flexbox and CSS positioning.

Stack Layout

The first kind of layout that we will implement is the Stack Layout. For vertical orientation, it stacks elements on top of each other, while for horizontal orientation, the elements are placed side by side. Let's take a look at vertical orientation first:

Vertical Stack Layout

Here's the code to accomplish the layout above:
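A reconstruction sketch; the component name and colors are illustrative:

```jsx
import React, { Component } from 'react';
import { Dimensions, StyleSheet, View } from 'react-native';

const { height } = Dimensions.get('window'); // available height
const boxHeight = height / 3;                // three equal boxes

export default class StackLayout extends Component {
  render() {
    return (
      <View style={styles.container}>
        <View style={[styles.box, styles.box1]} />
        <View style={[styles.box, styles.box2]} />
        <View style={[styles.box, styles.box3]} />
      </View>
    );
  }
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    flexDirection: 'column',
  },
  box: { height: boxHeight },          // common style
  box1: { backgroundColor: '#2196F3' }, // unique colors
  box2: { backgroundColor: '#8BC34A' },
  box3: { backgroundColor: '#e3aa1a' },
});
```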

Breaking down the code above, we first get the height of the available space for the app to consume. Then we calculate what the height of each box will be. Since we have three boxes, we divide it by three.

For the markup, the boxes should be wrapped inside a container. Common styles are declared in the box object, and unique background colors are applied to uniquely named objects (box1, box2, box3), as in the sketch above.

To use Flexbox, you must use the flex property on the container. The value is the amount of space it will consume. If it's 1, it means that it will consume all the available space, provided that the element has no siblings. We'll take a look at an example of using flex with siblings later on. 

flexDirection allows you to specify the primary axis of the layout. By default, this is set to column. Setting flexDirection to column means that the children of the container will be laid out vertically (stacked on top of each other) while setting it to row means that the children will be laid out horizontally (side by side). To achieve equal height, set the height of the box to that of the value that we calculated earlier.

Here's an image to help you visualize how the content will flow based on the flexDirection that you specified.

Illustration of flexDirection row and column

The method I just showed you is the manual way of doing things. Using the Dimensions to compute the width or height of the elements will fail if your app supports both portrait and landscape device orientation. That's because as soon as the user flips their device, the width or height that you computed earlier will be wrong. React Native won't automatically recompute it for you, so the app ends up looking weird.

Flexbox can actually do the computation for you if you just supply the correct values. To achieve the same layout as above without using the Dimensions, all you have to do is specify flex: 1 for all the boxes instead of specifying the height:
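A sketch of the adjusted styles:

```jsx
import { StyleSheet } from 'react-native';

const styles = StyleSheet.create({
  container: {
    flex: 1,
    flexDirection: 'column',
  },
  box: {
    flex: 1, // each sibling gets an equal share of the height
  },
  box1: { backgroundColor: '#2196F3' },
  box2: { backgroundColor: '#8BC34A' },
  box3: { backgroundColor: '#e3aa1a' },
});
```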

This is now an example of using flex with siblings. Now we have three siblings with the same flex value. This means that all three of them will equally share the available space since the flex value is the same. (You can actually use any flex value as long as the child elements all have the same value.)

Using this knowledge, you can now achieve layouts with a header, content, and a footer:
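For example, with illustrative flex ratios and colors:

```jsx
import React, { Component } from 'react';
import { StyleSheet, View } from 'react-native';

export default class HeaderContentFooter extends Component {
  render() {
    return (
      <View style={styles.container}>
        <View style={styles.header} />
        <View style={styles.content} />
        <View style={styles.footer} />
      </View>
    );
  }
}

const styles = StyleSheet.create({
  container: { flex: 1 },
  header: { flex: 1, backgroundColor: '#2196F3' },
  content: { flex: 8, backgroundColor: '#8BC34A' }, // the big middle area
  footer: { flex: 1, backgroundColor: '#e3aa1a' },
});
```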

Here's what it will look like:

Stack Layout header content footer

Note that this will be static. So if your main content becomes higher than the maximum available height, then the rest of your content will be hidden. If you expect your content to go over that limit, you can use the built-in ScrollView component to automatically generate a vertical scrollbar just like in web pages. 

Horizontal Stack Layouts

To implement horizontal stack layouts, all you have to do is change the flexDirection to row.
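A sketch of the changed styles:

```jsx
import { StyleSheet } from 'react-native';

const styles = StyleSheet.create({
  container: {
    flex: 1,
    flexDirection: 'row', // side by side instead of stacked
  },
  box: { flex: 1 },
});
```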

If we change the box flex value back to 1, we get three boxes of equal width, placed side by side.

The only thing we changed is the flexDirection, which is now set to row. Since the boxes are all set to flex: 1, they will have the same width and height. All the ideas from the vertical stack layout are equally applicable to this one.

Justify Content 

If you want to control the distribution of children within a container, you use the justifyContent property on the container. 

Below are the five possible values that can be used with this property. In the following examples, the height of each of the children is diminished to demonstrate how each would look. You wouldn't be able to see any difference if the flex value was 1 for each of the children, because they would end up consuming all the available space.

  • flex-start: child elements are aligned toward the starting point. Notice the white background right below the last child: that empty space toward the end is how you can tell flex-start is being used.
Flex Start
  • flex-end: child elements are aligned toward the end line. Notice that this time the empty space is at the starting point.
  • center: child elements are placed towards the center. This time the empty space is equally divided between the starting and ending point.
Flex Center
  • space-around: child elements are distributed such that there is equal space around each of them. The outermost elements get less space on their outer sides, while the space between two adjacent children is doubled.
Flex Space Around
  • space-between: child elements are distributed such that there would be an equal amount of space between each of them. 
Flex Space Between

As you may have noticed, each of these style properties is dependent on the height or width of the child elements. It's dependent on the width if the flexDirection is row, and on the height if the flexDirection is column.

For example, space-between won't really have any effect on a vertical stack layout if each of the child elements is using flex to control the height. This is because there will be no more space left for the gap between each child element to consume. 

Align Items

At first glance, justifyContent and alignItems might look as if they're doing the same thing. They also share three possible values: flex-start, flex-end, and center, with the addition of a stretch value. 

The main difference between justifyContent and alignItems is the axis on which the children are distributed. As you have seen earlier, justifyContent always uses the primary axis when distributing child elements. But alignItems uses the axis opposite to the primary one. 

We already know that the axis is determined by the flexDirection that has been set. So if the flexDirection is row, the primary axis flows from left to right. This means that the cross axis will flow from top to bottom. On the other hand, if flexDirection is column then the cross axis will flow from left to right.

Below are some examples of justifyContent and alignItems implemented side by side with the flexDirection of row. The first one uses justifyContent while the second uses alignItems.

  • flex-start: the positioning of the elements is the same, which is why the alignItems implementation looks exactly like justifyContent.
justifyContent and alignItems flex-start
  • flex-end: now we start to see a difference. In the first instance, it's at the end of the line of the first row, while the second instance appears to be at the starting line of the last row. 
justifyContent and alignItems flex-end
  • center: center has the same idea as the rest of the values that we've used so far. In the first instance, the items are centered on the x-axis, while in the second, the items are centered on the y-axis.

justifyContent and alignItems center

  • stretch: use this to have the child elements stretch to fill the container. This is the default value for alignItems, so specifying this value is optional. You've already seen how this works when we implemented vertical and horizontal stack layouts.

Here's the code used in the examples above. Just play with the values for the flexDirection, justifyContent and alignItems if you want to see how they look:
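A reconstruction sketch of that container, with illustrative values:

```jsx
import { StyleSheet } from 'react-native';

const styles = StyleSheet.create({
  container: {
    flex: 1,
    flexDirection: 'row',     // or 'column'
    justifyContent: 'center', // 'flex-start' | 'flex-end' | 'space-around' | 'space-between'
    alignItems: 'center',     // 'flex-start' | 'flex-end' | 'stretch'
  },
  box: {
    width: 100,
    height: 100,
    backgroundColor: '#2196F3',
  },
});
```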

If you want to specify the alignment of individual elements within a container, you can use the alignSelf property. All the possible values for align-items are applicable to this property as well. So, for example, you can align a single element to the right of its container, while all the rest are aligned to the left.
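A fragment sketch (styles.box is assumed from the earlier examples):

```jsx
<View style={{ flex: 1, alignItems: 'flex-start' }}>
  <View style={styles.box} />
  <View style={[styles.box, { alignSelf: 'flex-end' }]} /> {/* this one hugs the right */}
</View>
```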

Grid Layout

React Native doesn't really come with a grid layout system, but Flexbox is flexible enough to create one. By using the things we learned so far, we can recreate Grid layouts using Flexbox. Here's an example:

Grid Layout

And here's the code that creates that layout:
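A sketch of such a component; the flex values play the role of column spans, and the colors are illustrative:

```jsx
import React, { Component } from 'react';
import { StyleSheet, View } from 'react-native';

export default class GridLayout extends Component {
  render() {
    return (
      <View style={styles.container}>
        <View style={styles.row}>
          <View style={[styles.item, { flex: 1 }]} />
          <View style={[styles.item, { flex: 2 }]} /> {/* twice as wide */}
          <View style={[styles.item, { flex: 1 }]} />
        </View>
        <View style={styles.row}>
          <View style={[styles.item, { flex: 1 }]} />
          <View style={[styles.item, { flex: 1 }]} />
        </View>
      </View>
    );
  }
}

const styles = StyleSheet.create({
  container: { flex: 1 },
  row: { flex: 1, flexDirection: 'row' },
  item: { backgroundColor: '#8BC34A' },
});
```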

From the code above, you can see that we're emulating what CSS grid frameworks usually do. Each row is wrapped in a separate View, and the grid items are inside it. A default flex value of 1 is applied to each item so that they equally share the space available on each row. For items that need to consume more space, a higher flex value is applied; this automatically adjusts the width of the other items to accommodate all of them.

If you want to add spaces between each item in a row, you can add a padding to each of them and then create a box inside each one.
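A sketch of the spacing trick:

```jsx
import { StyleSheet } from 'react-native';

// Usage: <View style={styles.item}><View style={styles.box} /></View>
const styles = StyleSheet.create({
  item: {
    flex: 1,
    padding: 5, // creates the gap between neighboring boxes
  },
  box: {
    flex: 1,
    backgroundColor: '#8BC34A',
  },
});
```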

This results in the following output:

Grid Layout With Spaces

Absolute Layout

React Native only supports absolute and relative positioning. This shouldn't limit you, though, because you can always combine these with Flexbox to position the different elements anywhere you want.

Let's look at how we would accomplish the layout described below.


We can achieve this easily if we have full command over the positioning values that are available in the browser. But since we're in React Native, we need to think of this the Flexbox way first and then use CSS positioning for the small boxes. 

Using Flexbox, this can be achieved in two ways. You can either use row or column for the flexDirection for the main container. How you arrange the different elements will depend on which method you choose. Here we're going to use row for the flexDirection so the screen will be divided into three columns. The first column will contain the orange box, the second column will contain the black, gray and green boxes, and the third will contain the blue and small purple boxes.

If you already know how each of the elements will be laid out, it's only a matter of applying the things we learned so far. After all, we don't really need to apply CSS positioning on the big boxes, only the small ones. 

The first column only has the orange box, so applying justifyContent: 'center' to its container should do the trick. In case you've already forgotten, flexDirection defaults to column. This means that if you set justifyContent to center, the children will be aligned on the center of the Y-axis. 

The second column has basically the same idea as the first one, only this time we don't want to align all the boxes to the center. What we want is for them to have equal spaces in between each other, and justifyContent: 'space-between' gets that job done. But at the same time we also want to center all the children on the X-axis so we use alignItems: 'center'

The only tricky part here is that you shouldn't apply any width property to the gray box because we want it to stretch all the way to consume the full width of its parent. Since we didn't apply any width, we should apply alignSelf: 'stretch' to the gray box so that it will consume the full width of its parent. 

Next, to position the small red box slightly away from its relative position, we use position: relative and then apply top and left values because its relative position is around the upper-left corner of its parent. 

As for the small orange box, we use position: 'absolute' because we need to align it to the upper right corner of its parent. This works because absolutely positioned elements in React Native are bound to their parent.
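A sketch of the two small boxes' styles, with illustrative offsets and sizes:

```jsx
import { StyleSheet } from 'react-native';

const styles = StyleSheet.create({
  smallRed: {
    position: 'relative', // nudged away from its normal spot
    top: 10,
    left: 10,
    width: 20,
    height: 20,
    backgroundColor: 'red',
  },
  smallOrange: {
    position: 'absolute', // pinned to the parent's upper-right corner
    top: 0,
    right: 0,
    width: 20,
    height: 20,
    backgroundColor: 'orange',
  },
});
```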

The third column basically applies the same idea so I'm no longer going to explain it.

Next, let's try to implement a fixed header and footer layout. This is commonly found in apps that have a tab navigation; the tabs are fixed at the bottom of the screen while the main content can be scrolled. 

For us to accomplish this, we need to use the ScrollView component so that if the main content goes over the height of the container, React Native will automatically generate a vertical scrollbar. This allows us to add marginTop and marginBottom to the main content container so that the fixed header and footer won't obstruct the main content. Also, note that the left and right values of the header and footer are set to 0 so that they will consume the full device width. 
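A sketch of that structure, with illustrative bar heights:

```jsx
import React, { Component } from 'react';
import { ScrollView, StyleSheet, Text, View } from 'react-native';

export default class FixedHeaderFooter extends Component {
  render() {
    return (
      <View style={{ flex: 1 }}>
        <View style={styles.header} />
        <ScrollView style={styles.content}>
          <Text>Main content goes here...</Text>
        </ScrollView>
        <View style={styles.footer} />
      </View>
    );
  }
}

const styles = StyleSheet.create({
  header: { position: 'absolute', top: 0, left: 0, right: 0, height: 60, backgroundColor: '#2196F3' },
  content: { marginTop: 60, marginBottom: 60 }, // keeps the bars from covering the content
  footer: { position: 'absolute', bottom: 0, left: 0, right: 0, height: 60, backgroundColor: '#e3aa1a' },
});
```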

Here's how it will look:

Fixed header and footer

Third-Party Libraries

React Native has a big community behind it, so it's no wonder that a few libraries have already been created to ease the implementation of layouts. In this section, I'll introduce you to a library called React Native Easy Grid. You can use it to describe how you want to lay out your app by making use of the Grid, Row, and Col components.

You can install it with the following command: 
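The package is available on npm:

```bash
npm install react-native-easy-grid --save
```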

Import the library and extract the different components in your file.
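Following the library's documented API:

```jsx
import { Col, Row, Grid } from 'react-native-easy-grid';
```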

The Grid component is used for wrapping everything. Col is used to create a column, and Row is used to create rows. You can specify a size property for both Row and Col, though we only used it on the Row below. If the size isn't specified, it will equally divide the available space between the Col instances. 

In this case, there are only two, so the whole screen is divided into two columns. The first column is then divided into two rows. Here we specified a size, but you can actually skip it if you just need equally sized rows, as we did below.
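A sketch of the markup just described:

```jsx
<Grid>
  <Col>
    <Row size={2} style={styles.topRow} />
    <Row size={1} style={styles.bottomRow} />
  </Col>
  <Col style={styles.rightColumn} />
</Grid>
```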

Once that's done, all you have to do is add the styling for the rows and columns:
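For example, with illustrative colors:

```jsx
import { StyleSheet } from 'react-native';

const styles = StyleSheet.create({
  topRow: { backgroundColor: '#2196F3' },
  bottomRow: { backgroundColor: '#8BC34A' },
  rightColumn: { backgroundColor: '#e3aa1a' },
});
```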

As you have noticed, React Native Easy Grid has a very intuitive API. 

Conclusion

In this tutorial, you learned how to lay out React Native apps. Specifically, you learned how to use React Native's Flexbox to position things around. You also learned how to use React Native Easy Grid, which makes Flexbox implementation easier. 

In an upcoming tutorial, we'll put everything you learned into practice by recreating UI elements that are commonly found in apps: things like the calendar, lists, and tab navigation.

2016-10-26T14:59:48.000Z · Wernher-Bel Ancheta


How to Create an Android Chat App Using Firebase

tag:code.tutsplus.com,2005:PostPresenter/cms-27397
Final product image
What You'll Be Creating

With Firebase, creating real-time social applications is a walk in the park. And the best thing about it: you don't have to write a single line of server-side code.

In this tutorial, I'll show you how to leverage Firebase UI to create a group chat app you can share with your friends. It's going to be a very simple app with just one chat room, which is open to all users.

As you might have guessed, the app will depend on Firebase Auth to manage user registration and sign in. It will also use Firebase's real-time database to store the group chat messages.

Prerequisites

To be able to follow this step-by-step tutorial, you'll need a working Android Studio setup and a Firebase account.

For instructions on how to set up a Firebase account and get ready for Firebase development in Android Studio, see my tutorial Get Started With Firebase for Android here on Envato Tuts+.

1. Create an Android Studio Project

Fire up Android Studio and create a new project with an empty activity called MainActivity.

Add empty activity

To configure the project to use the Firebase platform, open the Firebase Assistant window by clicking on Tools > Firebase.

While using the Firebase platform, it's usually a good idea to add Firebase Analytics to the project. Therefore, inside the Firebase Assistant window, go to the Analytics section and press Log an Analytics event.

Firebase Assistant

Next, press the Connect to Firebase button and make sure that the Create new Firebase project option is selected. Once the connection is established, press the Add Analytics to your app button.

Press Add analytics to your app

At this point, the Android Studio project is not only integrated with Firebase Analytics, it is also ready to use all other Firebase services.

2. Add Dependencies

We'll be using two libraries in this project: Firebase UI, and the Android design support library. Therefore, open the build.gradle file of the app module and add the following compile dependencies to it:
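For example (the exact versions are assumptions; use the latest available when you follow along):

```groovy
compile 'com.firebaseui:firebase-ui:0.6.0'
compile 'com.android.support:design:24.2.1'
```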

Press the Sync Now button to update the project.

3. Define Layouts

The activity_main.xml file, which is already bound to MainActivity, defines the contents of the home screen of the app. In other words, it will represent the chat room.

Like most other group chat apps available today, our app will have the following UI elements:

  • A list that displays all the group chat messages in a chronological order
  • An input field where the user can type in a new message
  • A button the user can press to post the message

Therefore, activity_main.xml must have a ListView, an EditText, and a FloatingActionButton. After placing them all inside a RelativeLayout widget, your layout XML should look like this:
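Something like the following sketch, where the IDs and attribute values are illustrative:

```xml
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <ListView
        android:id="@+id/list_of_messages"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_above="@+id/input_layout" />

    <android.support.design.widget.TextInputLayout
        android:id="@+id/input_layout"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:layout_toLeftOf="@+id/fab">

        <EditText
            android:id="@+id/input"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:hint="Write a message" />
    </android.support.design.widget.TextInputLayout>

    <android.support.design.widget.FloatingActionButton
        android:id="@+id/fab"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:layout_alignParentRight="true"
        android:src="@android:drawable/ic_menu_send" />
</RelativeLayout>
```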

Note that I've placed the EditText widget inside a TextInputLayout widget. Doing so adds a floating label to the EditText, which is important if you want to adhere to the guidelines of material design.

Now that the layout of the home screen is ready, we can move on to creating a layout for the chat messages, which will be items inside the ListView. Start by creating a new layout XML file called message.xml, whose root element is RelativeLayout.

The layout must have TextView widgets to display the chat message's text, the time it was sent, and its author. You are free to place them in any order. Here's the layout I'll be using:
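A sketch with illustrative IDs:

```xml
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:padding="8dp">

    <TextView
        android:id="@+id/message_user"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:textStyle="bold" />

    <TextView
        android:id="@+id/message_time"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentRight="true" />

    <TextView
        android:id="@+id/message_text"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_below="@id/message_user"
        android:textSize="18sp" />
</RelativeLayout>
```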

4. Handle User Authentication

Allowing users to anonymously post messages to the chat room would be a very bad idea. It could lead to spam, security issues, and a less than ideal chatting experience for the users. Therefore, let us now configure our app such that only registered users can read and post messages.

Start by going to the Auth section of the Firebase Console and enabling Email/Password as a sign-in provider.

Feel free to enable OAuth 2.0 sign-in providers as well. However, FirebaseUI v0.6.0 seamlessly supports only Google Sign-In and Facebook Login.

Step 1: Handle User Sign-In

As soon as the app starts, it must check if the user is signed in. If so, the app should go ahead and display the contents of the chat room. Otherwise, it must redirect the user to either a sign-in or a sign-up screen. With FirebaseUI, creating those screens takes a lot less code than you might imagine.

Inside the onCreate() method of MainActivity, check if the user is already signed in by checking if the current FirebaseUser object is not null. If it is null, you must create and configure an Intent object that opens a sign-in activity. To do so, use the SignInIntentBuilder class. Once the intent is ready, you must launch the sign-in activity using the startActivityForResult() method.

Note that the sign-in activity also allows new users to sign up. Therefore, you don't have to write any extra code to handle user registration.

Add the following code to the onCreate() method:
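A sketch of that code; SIGN_IN_REQUEST_CODE is an arbitrary request-code constant you define in MainActivity:

```java
// Assumes: private static final int SIGN_IN_REQUEST_CODE = 1;
if (FirebaseAuth.getInstance().getCurrentUser() == null) {
    // Not signed in: launch FirebaseUI's sign-in (and sign-up) flow.
    startActivityForResult(
            AuthUI.getInstance()
                    .createSignInIntentBuilder()
                    .build(),
            SIGN_IN_REQUEST_CODE);
} else {
    // Already signed in: greet the user and show the chat room.
    Toast.makeText(this,
            "Welcome " + FirebaseAuth.getInstance()
                    .getCurrentUser()
                    .getDisplayName(),
            Toast.LENGTH_LONG).show();
    displayChatMessages();
}
```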

As you can see in the above code, if the user is already signed in, we first display a Toast welcoming the user, and then call a method named displayChatMessages. For now, just create a stub for it. We'll be adding code to it later.

Once the user has signed in, MainActivity will receive a result in the form of an Intent. To handle it, you must override the onActivityResult() method.

If the result's code is RESULT_OK, it means the user has signed in successfully. If so, you must call the displayChatMessages() method again. Otherwise, call finish() to close the app.
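A sketch:

```java
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == SIGN_IN_REQUEST_CODE) {
        if (resultCode == RESULT_OK) {
            Toast.makeText(this, "Successfully signed in. Welcome!", Toast.LENGTH_LONG).show();
            displayChatMessages();
        } else {
            Toast.makeText(this, "We couldn't sign you in. Please try again later.", Toast.LENGTH_LONG).show();
            finish(); // close the app
        }
    }
}
```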

At this point, you can run the app and take a look at the sign-in and sign-up screens.

Sign up screen for new users

Step 2: Handle User Sign-Out

By default, FirebaseUI uses Smart Lock for Passwords. Therefore, once the users sign in, they'll stay signed in even if the app is restarted. To allow the users to sign out, we'll now add a sign-out option to the overflow menu of MainActivity.

Create a new menu resource file called main_menu.xml and add a single item to it, whose title attribute is set to Sign out. The contents of the file should look like this:
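A minimal sketch:

```xml
<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <item
        android:id="@+id/menu_sign_out"
        android:title="Sign out" />
</menu>
```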

To instantiate the menu resource inside MainActivity, override the onCreateOptionsMenu() method and call the inflate() method of the MenuInflater object.
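For example:

```java
@Override
public boolean onCreateOptionsMenu(Menu menu) {
    getMenuInflater().inflate(R.menu.main_menu, menu);
    return true;
}
```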

Next, override the onOptionsItemSelected() method to handle click events on the menu item. Inside the method, you can call the signOut() method of the AuthUI class to sign the user out. Because the sign-out operation is executed asynchronously, we'll also add an OnCompleteListener to it.
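A sketch:

```java
@Override
public boolean onOptionsItemSelected(MenuItem item) {
    if (item.getItemId() == R.id.menu_sign_out) {
        AuthUI.getInstance().signOut(this)
                .addOnCompleteListener(new OnCompleteListener<Void>() {
                    @Override
                    public void onComplete(@NonNull Task<Void> task) {
                        Toast.makeText(MainActivity.this,
                                "You have been signed out.", Toast.LENGTH_LONG).show();
                        finish(); // close the app once sign-out completes
                    }
                });
    }
    return true;
}
```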

Once the user has signed out, the app should close automatically. That's the reason why you see a call to the finish() method in the code above.

5. Create a Model

In order to store the chat messages in the Firebase real-time database, you must create a model for them. The layout of the chat message, which we created earlier in this tutorial, has three views. To be able to populate those views, the model too must have at least three fields.

Create a new Java class called ChatMessage.java and add three member variables to it: messageText, messageUser, and messageTime. Also add a constructor to initialize those variables.

To make the model compatible with FirebaseUI, you must also add a default constructor to it, along with getters and setters for all the member variables.

At this point, the ChatMessage class should look like this:
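A sketch of the class, with the field names used throughout this tutorial:

```java
import java.util.Date;

public class ChatMessage {

    private String messageText;
    private String messageUser;
    private long messageTime;

    public ChatMessage(String messageText, String messageUser) {
        this.messageText = messageText;
        this.messageUser = messageUser;
        this.messageTime = new Date().getTime(); // time of creation
    }

    public ChatMessage() {
        // Default constructor required by FirebaseUI
    }

    public String getMessageText() { return messageText; }
    public void setMessageText(String messageText) { this.messageText = messageText; }

    public String getMessageUser() { return messageUser; }
    public void setMessageUser(String messageUser) { this.messageUser = messageUser; }

    public long getMessageTime() { return messageTime; }
    public void setMessageTime(long messageTime) { this.messageTime = messageTime; }
}
```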

6. Post a Chat Message

Now that the model is ready, we can easily add new chat messages to the Firebase real-time database.

To post a new message, the user will press the FloatingActionButton. Therefore, you must add an OnClickListener to it.

Inside the listener, you must first get a DatabaseReference object using the getReference() method of the FirebaseDatabase class. You can then call the push() and setValue() methods to add new instances of the ChatMessage class to the real-time database.

The ChatMessage instances must, of course, be initialized using the contents of the EditText and the display name of the currently signed in user.

Accordingly, add the following code to the onCreate() method:
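A sketch, reusing the view IDs from the layout sketches above:

```java
FloatingActionButton fab = (FloatingActionButton) findViewById(R.id.fab);
fab.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        EditText input = (EditText) findViewById(R.id.input);

        // push() generates a new key; setValue() stores the ChatMessage under it.
        FirebaseDatabase.getInstance()
                .getReference()
                .push()
                .setValue(new ChatMessage(
                        input.getText().toString(),
                        FirebaseAuth.getInstance().getCurrentUser().getDisplayName()));

        input.setText(""); // clear the input field
    }
});
```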

Data in the Firebase real-time database is always stored as key-value pairs. However, if you observe the code above, you'll see that we're calling setValue() without specifying any key. That's allowed only because the call to the setValue() method is preceded by a call to the push() method, which automatically generates a new key.

7. Display the Chat Messages

FirebaseUI has a very handy class called FirebaseListAdapter, which dramatically reduces the effort required to populate a ListView using data present in the Firebase real-time database. We'll be using it now to fetch and display all the ChatMessage objects that are present in the database.

Add a FirebaseListAdapter object as a new member variable of the MainActivity class.
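For example:

```java
private FirebaseListAdapter<ChatMessage> adapter;
```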

Inside the displayChatMessages() method, initialize the adapter using its constructor, which expects the following arguments:

  • A reference to the Activity
  • The class of the object you're interested in
  • The layout of the list items
  • A DatabaseReference object

FirebaseListAdapter is an abstract class and has an abstract populateView() method, which must be overridden.

As its name suggests, populateView() is used to populate the views of each list item. If you are familiar with the ArrayAdapter class, you can think of populateView() as an alternative to the getView() method.

Inside the method, you must first use findViewById() to get references to each TextView that's present in the message.xml layout file. You can then call their setText() methods and populate them using the getters of the ChatMessage class.

At this point, the contents of the displayChatMessages() method should look like this:
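A sketch, again reusing the IDs from the earlier layout sketches (the FirebaseListAdapter constructor signature may vary slightly between FirebaseUI versions):

```java
private void displayChatMessages() {
    ListView listOfMessages = (ListView) findViewById(R.id.list_of_messages);

    adapter = new FirebaseListAdapter<ChatMessage>(this, ChatMessage.class,
            R.layout.message, FirebaseDatabase.getInstance().getReference()) {
        @Override
        protected void populateView(View v, ChatMessage model, int position) {
            TextView messageText = (TextView) v.findViewById(R.id.message_text);
            TextView messageUser = (TextView) v.findViewById(R.id.message_user);
            TextView messageTime = (TextView) v.findViewById(R.id.message_time);

            messageText.setText(model.getMessageText());
            messageUser.setText(model.getMessageUser());
            // android.text.format.DateFormat turns the timestamp into readable text.
            messageTime.setText(DateFormat.format("dd-MM-yyyy (HH:mm:ss)",
                    model.getMessageTime()));
        }
    };

    listOfMessages.setAdapter(adapter);
}
```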

The group chat app is ready. Run it and post new messages to see them pop up immediately in the ListView. If you share the app with your friends, you should be able to see their messages too, as soon as they post them.

Conclusion

In this tutorial, you learned how to use Firebase and FirebaseUI to create a very simple group chat application. You also saw how easy it is to work with the classes available in FirebaseUI to quickly create new screens and implement complex functionality.

To learn more about Firebase and FirebaseUI, do refer to the official documentation. Or check out some of our other Firebase tutorials here on Envato Tuts+!


2016-10-27T23:18:56.000Z · Ashraff Hathibelagal

Upgrade Your App to iOS 10


In this article, I would like to talk about iOS 10 and what you need to do to prepare your apps for it.

As with every major release, iOS 10 introduces a slew of changes and enhancements. Some are required, others are recommended, and there are also a few changes that can improve your application's user experience. Let's start with an overview of what is required if you build your application against the iOS 10 SDK.

1. App Transport Security Is Coming

The most important change isn't strictly related to iOS 10, but it is important enough that I want to discuss it first. Even though App Transport Security (ATS) has been around since iOS 9, it has always been easy to opt out of ATS by adding the following snippet to your target's Info.plist.
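That snippet is the well-known blanket opt-out:

```xml
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>
```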

But that is about to change. On 1 January 2017, every application submitted to the App Store, including updates of existing applications, will need to comply with the ATS guidelines. This means that your application is required to securely communicate with web services over HTTPS.

If you read my detailed discussion of App Transport Security, then you may remember that App Transport Security defines a set of rules. The servers your application communicates with need to comply with those rules. In other words, making network requests over HTTPS isn't enough. Each server your application talks to needs to be secured by and comply with modern security standards.

You can still define exception domains in the target's Info.plist, but it is no longer allowed to opt out of App Transport Security altogether.

Local Network Connections

I recently ran into a problem related to App Transport Security. The application of a client needed to communicate with other devices on the same network. It talks to other devices using their IP address, which isn't supported by App Transport Security exception domains. And to make things even more complicated, the IP address of a device isn't fixed. It can and will change over time.

Fortunately, as of iOS 10, it is possible to resolve this issue by adding an additional key-value pair to the NSAppTransportSecurity dictionary in the target's Info.plist. By setting the value of NSAllowsLocalNetworking to YES, it is possible to disable App Transport Security for local network traffic.
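For example:

```xml
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSAllowsLocalNetworking</key>
    <true/>
</dict>
```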

Other Options

If you've been struggling with App Transport Security in the past, then I recommend taking a look at the updated App Transport Security documentation. Apple has added a few additional keys that make working with ATS less of a headache.

For example, many applications load content from the web in a web view. Your application often doesn't know what websites the user is going to visit, which makes it impossible to define exception domains for App Transport Security in the target's Info.plist. As of iOS 10, you can disable App Transport Security for web views by setting NSAllowsArbitraryLoadsInWebContent to YES in the target's Info.plist.
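For example:

```xml
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSAllowsArbitraryLoadsInWebContent</key>
    <true/>
</dict>
```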

App Transport Security Is Required

What you need to remember is that App Transport Security is required for every application submitted to the App Store after 31 December 2016. Opting out of App Transport Security altogether is no longer possible. Note that the App Store review team requires an explanation from you if you partially opt out of App Transport Security by using an exception, such as NSAllowsLocalNetworking or NSAllowsArbitraryLoadsInWebContent. You can read more about this in Apple's documentation.

2. Privacy

Apple continues to invest in protecting the privacy of its customers, and that commitment also has consequences for developers. What does that mean for you?

If your application accesses a system service or device capability that requires the user's explicit permission, the user sees a system alert in which the application asks for the user's permission. The content of that alert used to be provided by the operating system if your application didn't specify one. This has changed in iOS 10.

Apple Continues to Invest In Privacy and Security

As of iOS 10, your application needs to tell the user why it needs access to a particular system service or device capability. You do this by adding a key to the target's Info.plist. If your application is localized, then you should also provide a translation for the description in the InfoPlist.strings file.

Here is a complete list of the privacy keys available in iOS 10. Most of them should look familiar, but some are new in iOS 10, such as NSSiriUsageDescription and NSAppleMusicUsageDescription.

  • HealthKit
    • NSHealthShareUsageDescription
    • NSHealthUpdateUsageDescription
  • Location
    • NSLocationUsageDescription
    • NSLocationAlwaysUsageDescription
    • NSLocationWhenInUseUsageDescription
  • NSBluetoothPeripheralUsageDescription
  • NSCalendarsUsageDescription
  • NSVoIPUsageDescription
  • NSCameraUsageDescription
  • NSContactsUsageDescription
  • NSHomeKitUsageDescription
  • NSAppleMusicUsageDescription
  • NSMicrophoneUsageDescription
  • NSMotionUsageDescription
  • NSPhotoLibraryUsageDescription
  • NSRemindersUsageDescription
  • NSSpeechRecognitionUsageDescription
  • NSSiriUsageDescription
  • NSVideoSubscriberAccountUsageDescription
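For example, a camera usage description might look like this (the wording is up to you; this one is illustrative):

```xml
<key>NSCameraUsageDescription</key>
<string>We need access to the camera so you can attach photos to your messages.</string>
```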

If you don't add a usage description for the system services and device capabilities your application uses, a warning is shown in the console, and the system alert that asks the user for permission isn't shown to the user. As a result, your application is denied access to that particular service or capability.

It goes without saying that the App Store review team rejects any applications that violate this policy. In fact, builds uploaded to the App Store that don't comply with this policy are automatically rejected.

If you use a third-party library or framework in your application, then make sure the correct usage descriptions are added to the target's Info.plist. Libraries and frameworks for ads often require several permissions you may not expect or know about.

3. Swift 3

If you open an existing project that contains Swift for the first time in Xcode 8, you are asked to migrate to Swift 3. If you don't feel quite ready yet, you can choose to migrate to Swift 2.3 instead. You have to choose one or the other, since Xcode 8 only supports these two versions of the Swift language. Swift 2.2.1 and Swift 2.3 are very similar. The most important difference is that Swift 2.3 is compatible with iOS 10, tvOS 10, watchOS 3, and macOS 10.12.

Should You Migrate Today?

Should you migrate to Swift 3 today? Probably not. But don't wait too long. At some point, Apple will require developers to submit applications with Xcode 8, which doesn't support Swift 2.2.1. You could stick with Swift 2.3, but why wouldn't you just make the jump?

It is true that migrating a project to Swift 3 has a dramatic impact on your project's codebase. Almost every line of code changes in some way. The API changes are substantial. But the upside is that you get to use Swift 3. I have been using Swift 3 for several months, and I love it. It is a major improvement over Swift 2.2.1 and Swift 2.3.

Plan Ahead

If you are working on a large project for a client, make sure you carefully plan the migration to Swift 3. For complex projects, the migration can take several days. The advantages are that you can start using the Swift 3 API, and you also benefit from the improved Swift 3 compiler powered by LLVM and Clang.

4. Enhancements and Deprecations

With every major release of iOS, Apple improves the platform by adding and removing APIs. Several frameworks have received a significant update, and the company also introduced several new frameworks.

Why is that important? If you want to stand out in the App Store, it pays off to keep your applications up to date and add support for new features of the platform. That's what this section is about.

User Notifications

The UILocalNotification class is deprecated as of iOS 10. What does this mean for you? You can still use UILocalNotification to schedule and manage local notifications, but it will probably go away at some point. But why has Apple decided to deprecate UILocalNotification? It worked fine, right?

In iOS 10, Apple introduced the User Notifications framework. As the name implies, the framework is in charge of scheduling, managing, and handling notifications, local and remote. That is what makes the framework great. Your application no longer needs to make a distinction between local and remote notifications. The User Notifications framework offers a unified API for handling local and remote notifications.

The API looks and feels very nice. The framework treats local and remote notifications the same from a developer's perspective, which makes adding notification actions easy and transparent. Handling notification actions is centralized in a concise delegate protocol. 
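To give you a feel for the unified API, here is a minimal Swift 3 sketch that asks the user for permission and schedules a local notification. The identifier, title, and body are placeholder values.

    import UserNotifications

    let center = UNUserNotificationCenter.current()

    // Ask the user for permission before scheduling anything.
    center.requestAuthorization(options: [.alert, .sound]) { granted, error in
        guard granted else { return }

        let content = UNMutableNotificationContent()
        content.title = "Reminder"
        content.body = "Don't forget to stretch."

        // Fire once, sixty seconds from now.
        let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 60, repeats: false)
        let request = UNNotificationRequest(identifier: "stretch-reminder",
                                            content: content,
                                            trigger: trigger)
        center.add(request)
    }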

We have some tutorials about the User Notifications framework right here on Envato Tuts+!

You might also be interested in my recent blog posts about scheduling local notifications and notification actions with the User Notifications framework.

SiriKit

One of the bigger announcements during this year's WWDC was the possibility to integrate your application with Siri through SiriKit. Even though the options are limited for the time being, if your application fits into one of the supported categories, it is a great way to set your application apart from the competition. Siri currently supports a limited number of domains, including VoIP calling, messaging, and workouts.

You integrate with Siri by adding an extension to your application. Every application integrating with Siri needs to add an intents extension. It allows your application to carry out a task in response to information Siri sends to your application. You can optionally create an intents UI extension to customize the look and feel of the resulting user interface that is presented to the user after the task is completed.

Haptic Feedback

The brand new haptic engine of iPhone 7 and iPhone 7 Plus has opened up many new possibilities for developers. In iOS 10, it is possible to use the haptic engine of the device to provide the user with tactile feedback when they perform a specific action or a particular event occurs.

Your application can drive the haptic engine through the UIFeedbackGenerator class and its three concrete subclasses:

  • UIImpactFeedbackGenerator
  • UINotificationFeedbackGenerator
  • UISelectionFeedbackGenerator

Each UIFeedbackGenerator subclass is designed for a specific scenario. If you add support for the haptic engine, you are certainly going to amaze the users of your application. Give it a try.
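Here is a minimal Swift sketch of the API. Where and when you trigger feedback is a design decision; the styles used below are just examples.

    import UIKit

    // Impact feedback, e.g. when a draggable element snaps into place.
    let impact = UIImpactFeedbackGenerator(style: .medium)
    impact.prepare() // Wakes the haptic hardware to reduce latency.
    impact.impactOccurred()

    // Notification feedback, e.g. when a task finishes successfully.
    let notification = UINotificationFeedbackGenerator()
    notification.notificationOccurred(.success)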

Core Data

Core Data is probably my favorite Cocoa framework, and Apple has made it even more awesome on iOS 10 and with the release of Swift 3. This is the biggest update the framework has seen in the last few years.

Swift 3 and Xcode 8 join forces to make Core Data easier to use than ever before. Apple also introduced the NSPersistentContainer class, which makes setting up and managing a Core Data stack a breeze.

The company even revamped the underpinnings of the framework by rethinking how it interacts with SQLite. The results are truly fantastic. It is great to see that Apple continues to invest in Core Data, more than ten years after its introduction in Mac OS X Tiger.
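Setting up a Core Data stack with NSPersistentContainer really is short. Here is a minimal sketch, assuming your data model file is named DataModel.

    import CoreData

    // "DataModel" must match the name of your .xcdatamodeld file.
    let container = NSPersistentContainer(name: "DataModel")
    container.loadPersistentStores { storeDescription, error in
        if let error = error {
            fatalError("Unable to load persistent store: \(error)")
        }
        // container.viewContext is now ready for use on the main queue.
    }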

To learn more about Core Data, check out some of our other courses and tutorials here on Envato Tuts+.

What Should You Do?

If you build your application against the iOS 10 SDK, which means you are using Xcode 8, then you need to make sure you comply with App Transport Security and the privacy guidelines Apple has put into place. Make sure you tick those boxes first.

Even though the other enhancements and improvements are optional, I encourage you to take a look at them. For example, don't wait too long to migrate to Swift 3. You could surprise your users by adding support for the haptic engine. It's optional, but it's an opportunity to stand out in today's crowded App Store.

To learn more about Swift 3 or iOS 10, check out some of our other courses and tutorials.

2016-11-03T12:43:56.000Z · Bart Jacobs

Get Started With an iOS App Template in 60 Seconds

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-27517

iOS app templates from CodeCanyon can jump-start your app development. This video will show you how to get started with your own app in only 60 seconds!

 

Universal for iOS App Template

Hey there, folks. Derek Jensen here to tell you that integrating content from the web into your mobile apps can be quite taxing. But with a little help from the Universal for iOS Full Multi-Purpose iOS app template by Sherdleapps, life has become a little easier. Once you have purchased, downloaded, and extracted the template to your local machine, you'll find there's lots of useful information, especially within the documentation folder.

Universal for iOS product page

In here, you will find out how to customize the colors of your application, create a customized about dialog, and even add your own contact information. With a good understanding of these extension points, you can open your project and navigate to the appdelegate.h file within the universal folder, where most of your configuration will be done.

Customizing the appdelegate.h file

Once the app starts up, you'll be presented with a very nice layout that already has content in place. In the second grouping of links, you'll start to run into content from providers that need some additional configuration. To begin retrieving data from these providers, simply switch back over to the documentation and find the appropriate instructions on how to get the necessary keys to enable that provider's content.

The template in action

Then simply place the provider's key or keys into the app delegate, rerun your application, navigate to the appropriate section, and voila, data at your fingertips.

Retrieving data from other services
2016-11-04T23:17:22.000Z · Derek Jensen

How to Get Started With Android's Native Development Kit

$
0
0
tag:code.tutsplus.com,2005:PostPresenter/cms-27605

With the launch of Android Studio 2.2, developing Android applications that contain C++ code has become easier than ever. In this tutorial, I'll show you how to use the Android Native Development Kit, which is usually referred to as just NDK, to create a native C++ library whose functions are available to Java classes.

Prerequisites

To be able to follow this tutorial, you will need the following:

  • the latest version of Android Studio
  • a basic understanding of C++ syntax

1. Why Write Native Code?

As a rule of thumb, you would develop an Android application using only Java. Adding C++ code increases its complexity dramatically and also reduces its portability. Nevertheless, here are some reasons why you would still want to do it:

  • To maximize performance: You can improve the performance of an Android application, though only marginally, by implementing the CPU-intensive portions of its business logic in C++.
  • To use high-performance APIs: Implementations of API specifications such as Vulkan Graphics and OpenSL ES are a part of the NDK. Therefore, Android game developers tend to use the NDK.
  • To use popular C/C++ libraries: There are numerous C and C++ libraries out there that have no Java equivalents. If you want to work with them in your Android app, using the NDK is the way to go.
  • To reuse code: As long as it doesn't contain any platform-specific dependencies, code written in C++ can be used in both Android and iOS applications, usually with minimal changes. If you are developing a large application and intend to support both the iOS and Android platforms, using C++ might improve your productivity.

2. Creating a New Project

In Android Studio 2.2 or higher, the project creation wizard allows you to quickly create new projects that support C++ code.

Start by launching Android Studio and pressing the Start a new Android Studio project button in the welcome screen. In the next screen, give your application a meaningful name and check the Include C++ Support field.

Project configuration screen

In the activity creation screen of the wizard, choose the Add No Activity option. In the final screen of the wizard, make sure that the value of the C++ Standard field is set to Toolchain Default and press the Finish button.

Toolchain selection screen

The Android NDK and the tools it depends on are not installed by default. Therefore, once the project has been generated, you'll see an error that looks like this:

No NDK found error

To fix the error, go to Tools > Android > SDK Manager and switch to the SDK Tools tab.

In the list of available developer tools, select both CMake and NDK, and press the Apply button.

SDK Tools Dialog

Once the installation completes, restart Android Studio.

3. Creating a Native Library

An Android Studio project that supports C++ has an additional source code directory called cpp. As you might have guessed, all C++ files and libraries must be placed inside it. By default, the directory has a file called native-lib.cpp. For now, we'll be writing all our C++ code inside it.

In this tutorial, we'll be creating a simple native library containing a function that calculates the area of a circle using the formula πr². The function will accept the radius of the circle as a jdouble and return the area as a jstring.

Start by adding the following include directives to the file:
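    // jni.h provides the JNI types and functions; string gives us string
    // handling; math.h defines the M_PI macro. stdio.h is also pulled in
    // here because it declares the sprintf() function used later in this
    // tutorial.
    #include <jni.h>
    #include <string>
    #include <math.h>
    #include <stdio.h>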

jni.h is a header file containing several macro definitions, types, structures, and functions, all of which are indispensable while working with the NDK. (JNI stands for Java Native Interface, and it is the framework that allows the Java runtime to communicate with native code.) The string header file gives us string-handling utilities, which we need to build the message our function returns. The math.h header file defines the value of π as the M_PI macro.

By default, in order to support polymorphism, the C++ compiler modifies the names of all the functions you define in your code. This feature is often referred to as name mangling. Due to name mangling, calling your C++ functions from Java code will lead to errors. To avoid the errors, you can disable name mangling by defining your functions inside an extern "C" block.

The names of C++ functions that are accessible via JNI must have the following format:

  • They must have a Java_ prefix.
  • They must contain a mangled form of the package name where the dots are replaced with underscores.
  • They must contain the name of the Java class they belong to.

Additionally, you must specify the visibility of the function. You can do so using the JNIEXPORT macro. By convention, most developers also include the JNICALL macro in the function definition, although it currently doesn't serve any purpose in Android.

The following code defines a function called calculateArea, which can be accessed from a Java class called MainActivity:
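    extern "C" {

    // The package name used here (com.example.circlearea) is a placeholder;
    // substitute your own application's package, with the dots replaced by
    // underscores, as described above.
    JNIEXPORT jstring JNICALL
    Java_com_example_circlearea_MainActivity_calculateArea(JNIEnv *env,
                                                           jobject obj,
                                                           jdouble radius) {
        // Multiply the M_PI macro by the square of the radius.
        double area = M_PI * radius * radius;

        // Build a message saying what the area is.
        char output[40];
        sprintf(output, "The area is %f", area);

        // Convert the character array into a jstring object.
        return env->NewStringUTF(output);
    }

    }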

Note that in addition to the radius, the function also accepts a pointer to the JNIEnv, which has utility functions you can use to handle Java types, and a jobject instance, which is a reference to an instance of MainActivity. We will, of course, be creating MainActivity later in this tutorial.

Calculating the area is easy. All you need to do is multiply the M_PI macro by the square of the radius.

To show you how to handle strings while working with JNI, let us now create a new string containing a message that says what the area is. To do so, you can use the sprintf() function.

Because Java cannot directly handle a C++ character array, our function's return type is jstring. To convert the output array into a jstring object, you must use the NewStringUTF() function.

At this point, our C++ code is ready.

4. Using the Native Library

In the previous step, you saw that the calculateArea() function needs to belong to the MainActivity Java class. Start creating the class by right-clicking on your Java package name and selecting File > New > Empty Activity.

In the dialog that pops up, name the activity MainActivity. After making sure that the Launcher Activity option is checked, press the Finish button.

Create a new launcher activity

The native library must be loaded before it can be used. Therefore, add a static block to the class and load the library using the loadLibrary() method of the System class.
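    static {
        // "native-lib" is the library name the Android Studio C++ project
        // template generates by default; change it if yours differs.
        System.loadLibrary("native-lib");
    }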

To be able to use the calculateArea() C++ function inside the activity, you must declare it as a native method.
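    // The implementation lives in native-lib.cpp.
    public native String calculateArea(double radius);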

You can now use the calculateArea() method like any ordinary Java method. For example, you can add the following code to the onCreate() method to calculate and print the area of a circle whose radius is 5.5:
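    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Requires: import android.util.Log;
        // "AREA" is an arbitrary log tag chosen for this example.
        Log.d("AREA", calculateArea(5.5));
    }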

If you run the app, you should be able to see the following output in the logcat window:
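With the placeholder values used in the sketches above, the line should read something like this:

    D/AREA: The area is 95.033178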

Output in logcat

Conclusion

In this tutorial, you learned how to create a native C++ library and use it in an Android application. It is worth noting that the native build process, by default, generates a separate .so file for every single CPU architecture the NDK supports. Therefore, you can be sure that your application will run on most Android devices without any issues.

To learn more about the Android NDK, I suggest you refer to the NDK Guide.

And check out some of our other tutorials and courses on Android development!

2016-11-14T11:48:44.000Z · Ashraff Hathibelagal