Thursday, July 7, 2016

Adding Google Analytics to your React Application

Adding any kind of tracking to a project always seems to be an afterthought. Generally, just before launch, a stakeholder puts their hand up and states that we need to track everything… usually resulting in plenty of frustrated muttering from the developers involved.

The above scenario hits close to home for me. Just last week I was the developer sitting in “that” meeting, realising I needed to add tracking to a React-based application in record time. Thankfully, things turned out to be extremely easy: the application I was working on wasn’t too complex and was well structured.

This is how I went about it.

The business I was helping uses Google Analytics for all their tracking needs, so we decided to continue with that.

The project relies heavily on npm, so running the following command installed the React-GA module and saved it as a dependency in the package.json file:


npm install react-ga --save

Having installed React-GA, it was time to import the module into the relevant file and initialise it with our unique tracking ID.


import ReactGA from 'react-ga';
ReactGA.initialize('UA-XXXXXXXX'); // Unique Google Analytics tracking ID

The next step was to record a pageview on each route change. In my case, because of the router’s history setting, I needed to hook into the React Router onUpdate method and fire a function that reads the hash portion of the URL.


function fireTracking() {
    ReactGA.pageview(window.location.hash);
}

<Router onUpdate={fireTracking} history={hashHistory}>
    ...
</Router>

One thing to note: you may need to adjust the window.location value you pass to the ReactGA.pageview() function. It will really depend on how you have set up React Router.
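As a hedged sketch of that point: pageviewPath below is a hypothetical helper (not part of react-ga) showing which part of window.location you would track depending on the history type, hash for hashHistory, pathname plus query string for browserHistory.

```javascript
// Hypothetical helper: derive the string to pass to ReactGA.pageview()
// from a location-like object, depending on the router's history type.
function pageviewPath(location, usesHashHistory) {
    if (usesHashHistory) {
        // hashHistory keeps the route in the hash, e.g. "#/about"
        return location.hash;
    }
    // browserHistory keeps it in the pathname, e.g. "/about?q=1"
    return location.pathname + (location.search || '');
}

// e.g. with browserHistory you would call:
// ReactGA.pageview(pageviewPath(window.location, false));
```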

All in all, pretty straightforward. Around five lines of code.

With all tracking, the more metrics the better. Since the above was extremely quick to implement, I sat down with the marketing team to work on integrating custom events for particular actions within the app.

I’ll save you from the finer details but with the above ground work laid, adding a custom Google Analytics Event within the React application went like this:

Call a function (e.g. handleClick()) within an onClick event that fires a custom event.


import React from 'react';
import ReactGA from 'react-ga';

export default class SomeComponent extends React.Component {

    handleClick() {
        // Send a custom event to Google Analytics
        ReactGA.event({
            category: 'Navigation',
            action: 'Clicked Link',
        });
    }

    render() {
        return (
            <div>
                <a onClick={() => this.handleClick()}>Link</a>
            </div>
        );
    }
}
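ReactGA.event() also accepts optional label (a string) and value (a number) fields alongside the required category and action. As a sketch, buildEvent is a hypothetical helper of my own (not part of react-ga) that assembles such a payload and enforces the required fields:

```javascript
// Hypothetical helper: build a payload for ReactGA.event(), enforcing
// that the required category and action fields are present.
function buildEvent(category, action, label, value) {
    if (!category || !action) {
        throw new Error('category and action are required');
    }
    var event = { category: category, action: action };
    if (label !== undefined) { event.label = label; } // optional string
    if (value !== undefined) { event.value = value; } // optional number
    return event;
}

// e.g. ReactGA.event(buildEvent('Navigation', 'Clicked Link', 'Header'));
```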

Hopefully the above gives you a little insight into how to go about adding custom events within your React components.

It’s quite a trivial example, but it opens the door to lots of possibilities.

Happy tracking!

The post Adding Google Analytics to your React Application appeared first on Web Design Weekly.


by Jake Bresnehan via Web Design Weekly

How to create passive income on Social Media in three easy steps

A question as old as the internet itself: how do you make money online? You, or one of your friends, has probably wondered how feasible it would be to earn money with a little help from the internet and social media. The fact is, it isn’t really that hard.

Over 3 billion people worldwide have accessed the internet, while social media sites such as Instagram, Twitter, and Facebook boast impressive active user counts of 1.2 billion and above. This means you can reach nearly 2 billion people via social media, all with the push of a few buttons.

That is a staggering figure, and it is the reason every major company in the world uses social media to its advantage. But it’s not just for the major companies: you can create a passive income on social media too, and here is how to do it, in three easy steps.

by Guest Author via Digital Information World

Understanding ES6 Modules via Their History

This article is part of a web development series from Microsoft. Thank you for supporting the partners who make SitePoint possible.

ES6 brings the biggest changes to JavaScript in a long time, including several new features for managing large and complex codebases. These features, primarily the import and export keywords, are collectively known as modules.

If you’re new to JavaScript, especially if you come from another language that already had built-in support for modularity (variously named modules, packages, or units), the design of ES6 modules may look strange. Much of the design emerged from solutions the JavaScript community devised over the years to make up for that lack of built-in support.

We’ll look at which challenges the JavaScript community overcame with each solution, and which remained unsolved. Finally, we’ll see how those solutions influenced ES6 module design, and how ES6 modules position themselves with an eye towards the future.

First the <script> Tag, Then Controversy

At first, HTML limited itself to text-oriented elements, which were processed in a very static manner. Mosaic, one of the most popular early browsers, wouldn’t display anything until all the HTML had finished downloading. On an early ‘90s dial-up connection, this could leave a user staring at a blank screen for literally minutes.

Netscape Navigator exploded in popularity almost as soon as it appeared in the mid-to-late ‘90s. Like a lot of current disruptive innovators, Netscape pushed boundaries with changes that weren’t universally liked (read the fascinating email thread where Marc Andreessen pretty much implements the image tag before Tim Berners-Lee can finish saying why he doesn’t like it). One of Navigator’s many innovations was rendering HTML as it downloaded, allowing users to begin reading a page as soon as possible, and signaling the end for Mosaic in the process.

In a famous 10-day period in 1995, Brendan Eich created JavaScript for Netscape. Netscape didn’t originate the idea of dynamically scripting a web page (ViolaWWW preceded it by five years), but much like Isaac Singer’s sewing machine, its popularity made it synonymous with the concept.

The implementation of the <script> tag went back to blocking HTML download and rendering. The limited communication resources commonly available at the time couldn’t handle fetching two data sources simultaneously, so when the browser saw <script> in the markup, it would pause HTML processing and switch to handling JS. In addition, any JS actions that affected HTML rendering, done via the browser-supplied API called the DOM, placed a computational strain on even that day’s cutting-edge Pentium CPUs. So when the JavaScript had finished downloading, the browser would parse and execute it, and only then pick up processing HTML where it had left off.

At first, very few coders did any substantial JS work. Even the name suggested that JavaScript was a lesser citizen compared to its server-side relatives like Java and ASP. Most JavaScript around the turn of the century limited itself to client-side conditions the server couldn’t affect, often simple form activities like putting focus into the first field, or validating form input prior to submitting. The most common meaning of AJAX still referred to the caustic household cleaner, and almost all nontrivial actions required a full HTTP round trip to the server and back, so almost all web developers were backenders who looked down on the “toy” language.

Did you catch the gotcha in the last paragraph? Validating one form input might be simple, but validating multiple inputs on multiple forms gets complicated, and sure enough, so did JS codebases. As quickly as the undeniable usability benefits of client-side scripting became apparent, so did the problems with vanilla script tags: unpredictable notification of DOM readiness; variable collisions when concatenating files; dependency management; you name it.

JS developers had a very easy time finding jobs, and a very hard time enjoying them. When jQuery appeared in 2006, developers adopted it warmly. Today, 65 to 70% of the top 10 million websites have jQuery installed. But jQuery was never intended to solve the architectural issues, and could offer little help with them: the “toy” language had made it to the big time, and needed big-time skills.

What Exactly Did We Need?

Fortunately, other languages had already hit this complexity barrier, and found a solution: modular programming. Modules encouraged lots of best practices:

  1. Separation: Code needs to be separated into smaller chunks in order to be understandable. Best practice is for these chunks to take the form of files.
  2. Composability: You want to write code in one file but reuse it in many others. This promotes flexibility in a codebase.
  3. Dependency management: 65% of sites might have jQuery installed, but what if your product is a site add-on that needs a specific version? You want to reuse the installed version if it’s suitable, and load your own if it’s not.
  4. Namespace management: Similar to dependency management: can you move a file without rewriting core code?
  5. Consistent implementation: Everybody should not have to come up with their own solution to the same problems.

Early solutions

Each solution JavaScript developers came up with for these problems influenced the structure of ES6 modules. We’ll review the major milestones in their evolution and what the community learned at each step, finally showing the results in the form of today’s ES6 modules.

  1. Object Literal pattern
  2. IIFE/Revealing Module pattern
  3. CommonJS
  4. AMD
  5. UMD

Object Literal pattern

JavaScript already has a structure built in for organization: the object. Object literal syntax served as an early pattern for organizing code:

Example code 1


<!DOCTYPE html>
<html>
        <head>
                <script src="person.js"></script>
                <script src="author.js"></script>
        </head>
        <body>
                <script>
                    person.author.doJob('ES6 module history');
                </script>
        </body>
        <script>
                // shared scope means other code can inadvertently destroy ours
                var person = 'all gone!';
        </script>
</html>
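The person.js and author.js files referenced above might look something like the following sketch (the file contents and names are my reconstruction, not the article’s originals):

```javascript
// person.js -- a single global object acts as the namespace root
var person = {
    name: 'Elias'
};

// author.js -- functionality is hung off the shared global
person.author = {
    doJob: function (topic) {
        return person.name + ' writes about ' + topic;
    }
};

// usage, as in the inline <script> above
person.author.doJob('ES6 module history');
```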

What it offered

This approach’s primary benefit was its ease of understanding and implementation. Many of its other aspects were not at all easy.

What held it back

It relied on a variable in the global scope (person) as its root; if other JS on the page declared a variable with the same name, your code would disappear without a trace! In addition, there’s no reusability: if we wanted to support a monkey banging on a typewriter, we’d have to duplicate the author.js file:

Example code 2
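The duplicated file might look like this hedged sketch (monkey.js, the name Bobo, and the contents are my illustration of the duplication, not the article’s original Example code 2):

```javascript
// monkey.js -- a near-copy of author.js: the doJob logic must be
// duplicated because it is hard-wired to its own global root object
var monkey = {
    name: 'Bobo'
};

monkey.author = {
    doJob: function (topic) {
        return monkey.name + ' types out ' + topic;
    }
};

monkey.author.doJob('Hamlet');
```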

Last, the order in which the files load is critical: every version of author will error if person (or monkey) doesn’t exist first.

IIFE/Revealing Module pattern

An IIFE (pronounced “iffy” according to the term’s coiner, Ben Alman) is an Immediately Invoked Function Expression. The function expression is the function keyword and body wrapped in the first set of parentheses. The second set of parens invokes the function, passing whatever is inside as arguments to the function’s parameters. By returning an object from the function expression, we get the revealing module pattern:
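A minimal sketch of the pattern just described (the names are hypothetical): the function expression sits in the first set of parentheses, the second set invokes it with 'Elias' as an argument, and only the returned object is visible outside.

```javascript
// Revealing module: private state lives in the IIFE's closure;
// only the returned object is exposed.
var person = (function (name) {
    var secret = 'private state'; // invisible from outside the IIFE

    function doJob(topic) {
        return name + ' writes about ' + topic;
    }

    return { doJob: doJob }; // reveal only what we choose
})('Elias');

person.doJob('ES6 module history');
// person.secret is undefined; the closure keeps it private
```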

Continue reading Understanding ES6 Modules via Their History


by Elias Carlston via SitePoint

This week's JavaScript news, issue 291

JavaScript Weekly
Issue 291 — July 7, 2016

Next Tuesday at 2pm ET, we're running a live webcast on building better CLI tools with Node if you want to come along :-)

Kent C. Dodds covers the history of JavaScript modules before looking at how the ES6 standard handles them and how they work in practice.
Kent C. Dodds

Learn how to build an Angular 2 ‘To Do’ list CRUD app using Angular CLI to generate components, services, and tests.
Todd Motto

A look at a subtle change in jQuery 3 that could cause you headaches when debugging: exceptions within the document ready callback are now swallowed.
Christian Schlensker

Quickly pinpoint what’s broken and why. Get the context and insights to defeat all JavaScript application errors.
Rollbar   Sponsored
Rollbar

A major release of the popular JavaScript linting tool, including a handful of breaking changes.
jQuery Foundation

If you missed this last year and if you’ve not got your head into React yet, enjoy this extensive introduction complete with interactive code boxes.
Shusaku Uesugi

A walkthrough of creating a tic-tac-toe game using Horizon, a realtime-focused RethinkDB-based backend for mobile-based JavaScript apps.
Wern Ancheta

Jobs Supported by Hired.com

  • Front-End Engineer - New York, Seattle and London. Are you a world-class front-end engineer interested in working on high-performance platforms and tools using the latest JavaScript tech stack? Help us build next-generation products in the advertising division. AMAZON
  • Tech Lead/Senior Engineer at Red Badger (London). We’re looking for a passionate and experienced engineer to join our lean, agile & tech-loving team. This role includes getting your hands dirty in the code on a daily basis with a huge range of benefits. Apply here. Red Badger
  • JavaScript Developer at Evolution Gaming. We are looking for a senior developer ready to shape the future and accomplish challenging tasks, e.g. migrating stateful legacy components to functional React-Redux ones and modularising CSS with the help of css-modules. Evolution Gaming

Can't find the right job here? Want companies to apply to you? Try Hired.com.

In brief

Looking for more on Node? Read this week's Node Weekly too :-)

Curated by Peter Cooper and published by Cooper Press.

© Cooper Press Ltd. Office 30, Lincoln Way, Louth, LN11 0LS, UK


via JavaScript Weekly

A Recipe for mRuby Raspberry Pi? Just Add h2o!

It’s IoT Week at SitePoint! All week we’re publishing articles focused on the intersection of the internet and the physical world, so keep checking the IoT tag for the latest updates. When the excellent folks at SitePoint told me about IoT Week and said I needed to generate a couple of Ruby-related IoT posts, I […]

Continue reading A Recipe for mRuby Raspberry Pi? Just Add h2o!


by Glenn Goodrich via SitePoint

Ilya Gelfenbeyn, CEO of Api.ai, on AI and the IoT

Artificial Intelligence is a fascinating topic for many people nowadays, whether they are consumers or influencers. Today, I’m happy to be joined by Ilya Gelfenbeyn, CEO and co-founder of Api.ai, a conversational UX platform used to embed natural language understanding capabilities into connected devices, apps and services. Regular readers of SitePoint may recognize the service, as we have covered Api.ai in the past with a series earlier this year on getting started with the platform.

Ilya has a background in machine learning, natural language processing and conversational interfaces.

Ilya Gelfenbeyn, CEO of Api.ai

Elio Qoshi: Thanks for being with us today for this interview Ilya!

Ilya Gelfenbeyn: My pleasure.

Elio: We have covered Api.ai in the past, but could you briefly explain the concept behind it?

Ilya: Sure. Imagine having an actual conversation with a product, just as you would with a human. Api.ai is a conversational user experience platform for building natural language interactions for bots, applications, services and devices. The Api.ai platform lets developers seamlessly integrate conversational chatbots and intelligent voice command into their products and services. Developers can use Api.ai for speech recognition, context awareness, and conversational management to quickly and easily differentiate their business, increase satisfaction, and improve business processes.

Elio: What are some specific use cases Api.ai can be a great fit for?

Ilya: It fits a landscape of horizontal use cases. We are seeing our technology applied in innovative, creative ways for travel, customer service, e-commerce, the Internet of Things, gaming, automotive, finance, and more. Building more engaging and personal user experiences improves customer retention, increases revenue, reduces operation costs, and promotes productivity. Conversational user experiences let us feel more connected to products, companies, and devices in a more human way while allowing us to automate for efficiency.

Elio: What do developers need to know to be able to use Api.ai? Any prerequisites?

Ilya: Api.ai makes it easy for developers (and non-coders) to design and integrate intelligent and sophisticated conversational user interfaces into their products. Once you create your bot or agent, you can quickly and easily deploy it across various pre-integrated platforms, such as Facebook Messenger, Slack, Twilio, Cisco Tropo and Spark, Skype, Kik, Telegram and more.

You can leverage several pre-built knowledge packages created for a variety of popular topics based on over two and a half billion user queries processed by the system. When enabled, your agent can understand thousands of diverse requests out of the box – no coding required. Additionally, Api.ai has a robust library of SDKs and integrations with several popular platforms and technologies.

Elio: What’s Api.ai’s origin story?

Ilya: Speaktoit was co-founded by Artem Goncharuk, Pavel Sirotin, and myself; the team specializes in human-computer interaction technology based on natural language conversations and deep neural learning. In 2011 we launched Assistant, an intelligent personal assistant app six months before Siri was released. Assistant is one of the highest rated Android apps with over 30 million subscribers.

Continue reading Ilya Gelfenbeyn, CEO of Api.ai, on AI and the IoT


by Elio Qoshi via SitePoint

6 Easy Ways to Leverage Social Search in WordPress

For years Google and other search engines have been essential for gaining visibility on the web, yet today social media sites such as Twitter, Facebook, Pinterest, Instagram, Google+, and other networks are major sources of traffic. These sites work a bit differently than Google.

If you’re tasked with developing or running a WordPress site, you can’t afford to ignore social search, especially since it’s relatively simple to implement. According to the Content Marketing Institute, Social Search Optimization and Search Engine Optimization go hand in hand.

Continue reading 6 Easy Ways to Leverage Social Search in WordPress


by Charles Costa via SitePoint