Monday, June 5, 2017

Dahi

A fashion brand whose key material is silk.


by csreladm via CSSREEL | CSS Website Awards

The Psychology of the Perfect Marketing Video

Video marketing is here to stay. According to HubSpot, video is the form of online content people consume most thoroughly, and the future demand for it ranks with social media posts and news articles. In addition, four times as many people prefer to watch a video about a product or service than...

[ This is a content summary only. Visit our website http://ift.tt/1b4YgHQ for full links, other content, and more! ]

by Irfan Ahmad via Digital Information World

Web Design Weekly #281

Headlines

The Mindfulness of a Manual Performance Audit

Chip Cullen shares a great list of tips and resources to help you do a manual performance audit that gives you a true understanding of what is happening under the hood with your site. (alistapart.com)

11 things I learned reading the flexbox spec (hackernoon.com)

Snag the hottest new domain name for designers

.design domains were just released and some of the best ones are still available. They normally cost $35, but Web Design Weekly subscribers can get them now for only $5. (we.design)

Articles

HTTP/2 push is tougher than I thought

Jake Archibald digs into some of the finer details surrounding HTTP/2 push from the performance aspect and has produced a really thorough and easy to understand article. (jakearchibald.com)

Just Keep Scrolling. How To Design Lengthy, Lengthy Pages

Nick Babich discusses some of the benefits, things to consider, and quick tips for long-scrolling designs. (smashingmagazine.com)

Redux vs MobX: Which Is Best for Your Project?

A great article that gives some sound advice on which (Redux or MobX) state management solution is best for your project. (sitepoint.com)

Floating labels are problematic

With so many people adopting the floating label pattern within forms this is a pretty good read that highlights some of its flaws. (medium.com)

A Love Letter to CSS

If you are having a bad day fighting with CSS then it might be worth taking 5 minutes to read this post by TJ VanToll which gives CSS a bit of context compared to other styling workflows. (developer.telerik.com)

Tools / Resources

Browserslist

Chris Coyier gives us the lowdown on Browserslist and why it’s something we should be looking into. (css-tricks.com)

Free eBook: How to start a WordPress maintenance business

Our 50+ page eBook covers everything you need to know to start your WordPress maintenance business. (godaddy.com)

The Sketch Course

A new tutorial series from the creator of the most popular Sketch App tutorials on YouTube. Brand new for 2017. (youtube.com)

Monitoring Jank

The team at Lever share how they went from not having any visibility into their app performance to having a fine-tuned monitoring setup. (fulcrum.lever.co)

Adding Comments to a Jekyll Blog (keirwhitaker.com)

Node v8 released (nodejs.org)

Inspiration

Writing software using a phone (medium.com)

I want to design again… (medium.com)

Jobs

Product Designer at AngelList

You should be someone who can combine design thinking with execution, to produce industry-leading work and inspire excellence. You should see yourself as a generalist, with strengths and weaknesses across the user experience design spectrum. You should know how to work with metrics. (angel.co)

Designer at WegoWise

WegoWise is looking for a Designer to create beautiful designs that visually communicate WegoWise’s value to target audiences. This role works closely with our Head of Design and Marketing team and on a variety of platforms, including our web application, landing pages, flyers, email campaigns, etc. (wegowise.com)

Need to find passionate developers or designers? Why not advertise in the next newsletter?

Last but not least…

Production Progressive Web Apps With JavaScript Frameworks (youtube.com)

The post Web Design Weekly #281 appeared first on Web Design Weekly.


by Jake Bresnehan via Web Design Weekly

Building a Lean Modular Monolith with OSGi

While microservices are all the hype, notable experts warn against starting out that way. Instead you might want to build a modular monolith first - a safe bet if you are considering a move to microservices later but do not yet have an immediate need for them. This article shows you how to build a lean monolith with OSGi, modular enough to be split into microservices without too much effort when that style becomes the appropriate solution for the application's scaling requirements.

A very good strategy for creating a well-modularized solution is to implement domain-driven design (Eric Evans). It already focuses on business capabilities and has the notion of bounded contexts that provide the necessary modularization. In this article we will use OSGi to implement the services, as it provides good support for modules (bundles) and lightweight communication between them (OSGi services). As we will see, this will also provide a nice path to microservices later.

This article does not require prior knowledge of OSGi. I will explain the relevant aspects as we go along, and if you come away from this article with the understanding that OSGi can be used to build a decoupled monolith in preparation for a possible move towards microservices, it will have achieved its goal. You can find the sources for the example application on GitHub.

Our Domain: A Modular Messaging Application

To keep the business complexity low, we will use a rather simple example: a chat application. We want the application to be able to send and receive broadcast messages, and we will implement this in three very different channels:

  • shell support
  • IRC support
  • IoT support using a Tinkerforge-based display and motion detector

Each of these channels uses the same interfaces to send and receive messages. It should be possible to plug the channels in and out and to automatically connect them to each other. In OSGi terms each channel will be a bundle and use OSGi services to communicate with the other channels.

Don't worry if you do not have Tinkerforge hardware. Obviously the Tinkerforge module will then not work but it will not affect the other channels.

Common Project Setup and OSGi Bundles

The example project will be built using Maven and most of the general setup is done in the parent pom.

OSGi bundles are just JAR files with an enhanced manifest that contains the OSGi specific entries. A bundle has to declare which packages it imports from other bundles and which packages it exports. Fortunately most of this happens automatically by using the bnd-maven-plugin. It analyzes the Java sources and auto-creates suitable imports. The exports and other special settings are defined in a special file bnd.bnd. In most cases this file can be empty or even left out.

The two plugins below make sure each Maven module creates a valid OSGi bundle. The individual modules do not need special OSGi settings in the pom - for them it suffices to reference the parent pom that is being built here. The maven-jar-plugin defines that we want to use the MANIFEST file from bnd instead of the default Maven-generated one.

<build>
    <plugins>
        <plugin>
            <groupId>biz.aQute.bnd</groupId>
            <artifactId>bnd-maven-plugin</artifactId>
            <version>3.3.0</version>
            <executions>
                <execution>
                    <goals>
                        <goal>bnd-process</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-jar-plugin</artifactId>
            <version>2.5</version>
            <configuration>
                <archive>
                    <manifestFile>
                        ${project.build.outputDirectory}/META-INF/MANIFEST.MF
                    </manifestFile>
                </archive>
            </configuration>
        </plugin>
        <!-- ... more plugins ... -->
    </plugins>
</build>

Each of the modules we are designing below creates an OSGi bundle. The poms of each module are very simple as most of the setup is already done in the parent, so we omit these. Please take a look at the sources of the OSGi chat project to see the details.

Declarative Services

The example uses Declarative Services (DS) as a dependency injection and service framework. This is a very lightweight system defined by the OSGi specs that allows you to publish and use services as well as consume configuration. DS is very well suited to OSGi as it supports the full dynamics of OSGi, where bundles and services can come and go at any time. A component in DS can offer an OSGi service and depend on other OSGi services and configuration. Each component has its own dynamic lifecycle and will only activate when all mandatory dependencies are present. It will also dynamically adapt to changes in services and configuration, so changes are applied almost instantly.

As DS takes care of the dependencies, the developer can concentrate on the business domain and does not have to code the OSGi dynamics by hand. As a first example of a DS component, see the ChatBroker service below. At runtime DS uses XML files to describe components. The bnd-maven-plugin automatically processes the DS annotations and transparently creates these XML files during the build.
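To get a feel for what bnd generates, the component descriptor for the ChatBroker service shown below would end up looking roughly like this (a hand-written illustration rather than the exact generated file; the implementation package name is an assumption):

<!-- Illustrative sketch of a generated DS descriptor in OSGI-INF/ -->
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.3.0"
               name="net.lr.demo.chat.broker.ChatBroker"
               immediate="true">
    <implementation class="net.lr.demo.chat.broker.ChatBroker"/>
    <service>
        <provide interface="net.lr.demo.chat.broker.ChatBroker"/>
    </service>
    <reference name="listeners"
               interface="net.lr.demo.chat.service.ChatListener"
               cardinality="0..n"
               policy="dynamic"
               field="listeners"/>
</scr:component>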

The Chat API

In our simple chat domain we just need one service interface, ChatListener, for receiving chat messages. Modules that want to receive messages publish an implementation of ChatListener as an OSGi service to signal that they want to listen. This is called the whiteboard pattern and is widely used in OSGi.

public interface ChatListener {

    void onMessage(ChatMessage message);

}

ChatMessage is a value object to hold all information about a chat message.

public class ChatMessage implements Serializable {

    private static final long serialVersionUID = 4385853956172948160L;

    private Date time;
    private String sender;
    private String message;
    private String senderId;

    public ChatMessage(String senderId, String sender, String message) {
        this.senderId = senderId;
        this.time = new Date();
        this.sender = sender;
        this.message = message;
    }

    // .. getters ..

}

In addition we use a ChatBroker component, which allows sending a message to all currently available listeners. This is more of a convenience service, as each channel could simply implement this functionality on its own.

@Component(service = ChatBroker.class, immediate = true)
public class ChatBroker {

    private static Logger LOG = LoggerFactory.getLogger(ChatBroker.class);

    @Reference
    volatile List<ChatListener> listeners;

    public void onMessage(ChatMessage message) {
        listeners.parallelStream().forEach((listener)->send(message, listener));
    }

    private static void send(ChatMessage message, ChatListener listener) {
        try {
            listener.onMessage(message);
        } catch (Exception e) {
            LOG.warn(e.getMessage(), e);
        }
    }

}

ChatBroker is defined as a declarative service component using the DS annotations. It will offer a ChatBroker OSGi service and will activate immediately when all dependencies are present (by default DS components are only activated if their service is requested by another component).

The @Reference annotation defines a dependency on one or more OSGi services. In this case, the volatile List marks the dependency as (0..n). The list is automatically populated with a thread-safe representation of the currently available ChatListener services. The onMessage method uses a Java 8 parallel stream to send the message to all listeners in parallel.
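If you prefer to spell the cardinality and policy out instead of relying on the volatile List shorthand, the same reference could be declared explicitly like this (an equivalent sketch, not taken from the example project):

import java.util.List;

import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferenceCardinality;
import org.osgi.service.component.annotations.ReferencePolicy;

// Inside ChatBroker: the same (0..n) dynamic dependency, spelled out explicitly
@Reference(cardinality = ReferenceCardinality.MULTIPLE,
           policy = ReferencePolicy.DYNAMIC)
volatile List<ChatListener> listeners;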

In this module we need a bnd.bnd file to declare that we want to export the API package. In fact this is the only tuning of the bundle creation we do in this whole example project.

Export-Package: net.lr.demo.chat.service

The Shell Module

The shell channel allows to send and receive chat messages using the Felix Gogo Shell, a command line interface (much like bash) that makes for easy communication with OSGi. See also the appnote at enroute for the Gogo shell.

The SendCommand class implements a Gogo command that sends a message to all listeners when the command send <msg> is typed in the shell. It announces itself as an OSGi service with special service properties: the scope and function properties define that the service implements a command and how that command is addressed. The full syntax for our command is chat:send <msg>, but it can be abbreviated to send <msg> as long as send is unique.

When Felix Gogo recognizes a command on the shell, it calls a method with the name of the command and passes the parameter(s) as method arguments. In the case of SendCommand, the message parameter is used to create a ChatMessage, which is then handed to the ChatBroker service.

@Component(service = SendCommand.class,
    property = {"osgi.command.scope=chat", "osgi.command.function=send"}
)
public class SendCommand {

    @Reference
    ChatBroker broker;

    private String id;

    @Activate
    public void activate(BundleContext context) {
        this.id = "shell" + context.getProperty(Constants.FRAMEWORK_UUID);
    }

    public void send(String message) {
        broker.onMessage(new ChatMessage(id, "shell", message));
    }

}

The ShellListener class receives a ChatMessage and prints it to the shell. It implements the ChatListener interface and publishes itself as a service, so it becomes visible to ChatBroker and is added to its list of chat listeners. When a message comes in, the onMessage method is called and simply prints to System.out, which in Gogo represents the shell.

@Component
public class ShellListener implements ChatListener {

    public void onMessage(ChatMessage message) {
        System.out.println(String.format(
                "%tT %s: %s",
                message.getTime(),
                message.getSender(),
                message.getMessage()));
    }

}

The IRC Module

Continue reading Building a Lean Modular Monolith with OSGi on SitePoint.


by Nicolai Parlog via SitePoint

Angular and RxJS: Create an API Service to Talk to a REST Backend

This article is part 3 of the SitePoint Angular 2+ Tutorial on how to create a CRUD App with the Angular CLI.


  1. Part 0 — The Ultimate Angular CLI Reference Guide
  2. Part 1 — Getting our first version of the Todo application up and running
  3. Part 2 — Creating separate components to display a list of todos and a single todo
  4. Part 3 — Update the Todo service to communicate with a REST API
  5. Part 4 — Use Angular Router to resolve data
  6. Part 5 — Add authentication to protect private content

In part one we learned how to get our Todo application up and running and deploy it to GitHub pages. This worked just fine but, unfortunately, the whole app was crammed into a single component.

In part two we examined a more modular component architecture and learned how to break this single component into a structured tree of smaller components that are easier to understand, re-use and maintain.

In this part, we will update our application to communicate with a REST API back-end.

You don't need to have followed part one or two of this tutorial for part three to make sense. You can simply grab a copy of our repo, check out the code from part two, and use that as a starting point. This is explained in more detail below.

A Quick Recap

Here is what our application architecture looked like at the end of part 2:

[Image: application architecture diagram]

Currently the TodoDataService stores all data in memory. In this third article, we will update our application to communicate with a REST API back-end instead.

We will:

  • create a mock REST API back-end
  • store the API URL as an environment variable
  • create an ApiService to communicate with the REST API
  • update the TodoDataService to use the new ApiService
  • update the AppComponent to handle asynchronous API calls
  • create an ApiMockService to avoid real HTTP calls when running unit tests

[Image: updated application architecture diagram]

By the end of this article, you will understand:

  • how you can use environment variables to store application settings
  • how you can use the Angular HTTP client to perform HTTP requests
  • how you can deal with Observables that are returned by the Angular HTTP client
  • how you can mock HTTP calls to avoid making real HTTP request when running unit tests

So, let's get started!

Up and Running

Make sure you have the latest version of the Angular CLI installed. If you don't, you can install this with the following command:

npm install -g @angular/cli@latest

If you need to remove a previous version of the Angular CLI, you can:

npm uninstall -g @angular/cli angular-cli
npm cache clean
npm install -g @angular/cli@latest

After that, you'll need a copy of the code from part two. This is available at http://ift.tt/2mpeXuK. Each article in this series has a corresponding tag in the repository so you can switch back and forth between the different states of the application.

The code that we ended with in part two and that we start with in this article is tagged as part-2. The code that we end this article with is tagged as part-3.

You can think of tags like an alias to a specific commit id. You can switch between them using git checkout. You can read more on that here.

So, to get up and running (assuming the latest version of the Angular CLI is installed), we would do:

git clone git@github.com:sitepoint-editors/angular-todo-app.git
cd angular-todo-app
git checkout part-2
npm install
ng serve

Then visit http://localhost:4200/. If all is well, you should see the working Todo app.

Setting up a REST API back-end

Let's use json-server to quickly set up a mock back-end.

From the root of the application, run:

npm install json-server --save

Next, in the root directory of our application, create a file called db.json with the following contents:

{
  "todos": [
    {
      "id": 1,
      "title": "Read SitePoint article",
      "complete": false
    },
    {
      "id": 2,
      "title": "Clean inbox",
      "complete": false
    },
    {
      "id": 3,
      "title": "Make restaurant reservation",
      "complete": false
    }
  ]
} 

Finally, add a script to package.json to start our back-end:

"scripts": {
  ...
  "json-server": "json-server --watch db.json"
}

We can now launch our REST API using:

npm run json-server

which should display:

  \{^_^}/ hi!

  Loading db.json
  Done

  Resources
  http://localhost:3000/todos

  Home
  http://localhost:3000

That's it! We now have a REST API listening on port 3000.

To verify that your back-end is running as expected, you can navigate your browser to http://localhost:3000.

The following endpoints are supported:

  • GET /todos: get all existing todos
  • GET /todos/:id: get an existing todo
  • POST /todos: create a new todo
  • PUT /todos/:id: update an existing todo
  • DELETE /todos/:id: delete an existing todo

So if you navigate your browser to http://localhost:3000/todos, you should see a JSON response with all todos from db.json.

To learn more about json-server, make sure to check out mock REST APIs using json-server.

Storing the API URL

Now that we have our back-end in place, we must store its URL in our Angular application.

Ideally, we should be able to:

  1. store the URL in a single place so that we only have to change it once when we need to change its value
  2. make our application connect to a development API during development and connect to a production API in production

Luckily, Angular CLI supports environments. By default, there are two environments, development and production, each with a corresponding environment file: src/environments/environment.ts and src/environments/environment.prod.ts.

Let's add our API URL to both files:

// src/environments/environment.ts
// used when we run `ng serve` or `ng build`
export const environment = {
  production: false,

  // URL of development API
  apiUrl: 'http://localhost:3000'
};

// src/environments/environment.prod.ts
// used when we run `ng serve --environment prod` or `ng build --environment prod`
export const environment = {
  production: true,

  // URL of production API
  apiUrl: 'http://localhost:3000'
};

This will later allow us to get the API URL from our environment in our Angular application by doing:

import { environment } from 'environments/environment';

// we can now access environment.apiUrl
const API_URL = environment.apiUrl;

When we run ng serve or ng build, Angular CLI uses the value specified in the development environment (src/environments/environment.ts).

But when we run ng serve --environment prod or ng build --environment prod, Angular CLI uses the value specified in src/environments/environment.prod.ts.

This is exactly what we need to use a different API URL for development and production, without having to change our code.

The application in this article series is not hosted in production, so we specify the same API URL in our development and production environment. This allows us to run ng serve --environment prod or ng build --environment prod locally to see if everything works as expected.

You can find the mapping between dev and prod and their corresponding environment files in .angular-cli.json:

"environments": {
  "dev": "environments/environment.ts",
  "prod": "environments/environment.prod.ts"
} 

You can also create additional environments such as staging by adding a key:

"environments": {
  "dev": "environments/environment.ts",
  "staging": "environments/environment.staging.ts",
  "prod": "environments/environment.prod.ts"
}

and creating the corresponding environment file.
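A staging file would simply follow the same shape as the two environment files above (an illustrative sketch; the URL is a placeholder):

// src/environments/environment.staging.ts
export const environment = {
  production: false,

  // URL of staging API (placeholder value)
  apiUrl: 'https://staging-api.example.com'
};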

To learn more about Angular CLI environments, make sure to check out The Ultimate Angular CLI Reference Guide.

Now that we have our API URL stored in our environment, we can create an Angular service to communicate with the REST API.

Creating the Service to Communicate with the REST API

Let's use Angular CLI to create an ApiService to communicate with our REST API:

ng generate service Api --module app.module.ts

which gives the following output:

installing service
  create src/app/api.service.spec.ts
  create src/app/api.service.ts
  update src/app/app.module.ts

The --module app.module.ts option tells Angular CLI not only to create the service but also to register it as a provider in the Angular module defined in app.module.ts.
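After the command has run, the providers array in src/app/app.module.ts should contain the new service, roughly like the excerpt below (the other metadata is omitted, and the TodoDataService import path is assumed from part two):

// src/app/app.module.ts (excerpt)
import { NgModule } from '@angular/core';

import { ApiService } from './api.service';
import { TodoDataService } from './todo-data.service'; // path assumed from part two

@NgModule({
  // declarations, imports and bootstrap omitted for brevity
  providers: [TodoDataService, ApiService]
})
export class AppModule { }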

Let's open src/app/api.service.ts:

import { Injectable } from '@angular/core';

@Injectable()
export class ApiService {

  constructor() { }

} 

and inject our environment and Angular's built-in HTTP service:

import { Injectable } from '@angular/core';
import { environment } from 'environments/environment';
import { Http } from '@angular/http';

const API_URL = environment.apiUrl;

@Injectable()
export class ApiService {

  constructor(
    private http: Http
  ) {
  }

}

Before we implement the methods we need, let's have a look at Angular's HTTP service.

If you're unfamiliar with the syntax, why not buy our Premium course, Introducing TypeScript?

The Angular HTTP Service

The Angular HTTP service is available as an injectable class from @angular/http.

It is built on top of XHR/JSONP and provides us with an HTTP client that we can use to make HTTP requests from within our Angular application.

The following methods are available to perform HTTP requests:

  • delete(url, options): perform a DELETE request
  • get(url, options): perform a GET request
  • head(url, options): perform a HEAD request
  • options(url, options): perform an OPTIONS request
  • patch(url, body, options): perform a PATCH request
  • post(url, body, options): perform a POST request
  • put(url, body, options): perform a PUT request

Each of these methods returns an RxJS Observable.

In contrast to the AngularJS 1.x HTTP service methods, which returned promises, the Angular HTTP service methods return Observables.

Don't worry if you are not yet familiar with RxJS Observables. We only need the basics to get our application up and running. You can gradually learn more about the available operators when your application requires them and the ReactiveX website offers fantastic documentation.

If you want to learn more about Observables, it may also be worth checking out SitePoint's Introduction to Functional Reactive Programming with RxJS.
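As a small illustration of the Observable-based API, here is how a GET request against our mock back-end could be performed and consumed (an illustrative snippet only; TodoLogger is a made-up class, and the map operator has to be imported from RxJS):

// Illustrative only: fetch all todos from the mock API and log them
import { Injectable } from '@angular/core';
import { Http } from '@angular/http';
import 'rxjs/add/operator/map';

@Injectable()
export class TodoLogger {

  constructor(private http: Http) { }

  logAllTodos() {
    this.http.get('http://localhost:3000/todos') // returns an Observable<Response>
      .map(response => response.json())          // transform the raw Response into plain data
      .subscribe(todos => console.log(todos));   // the request is only sent once we subscribe
  }
}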

Implementing the ApiService Methods

If we think back to the endpoints our REST API back-end exposes:

  • GET /todos: get all existing todos
  • GET /todos/:id: get an existing todo
  • POST /todos: create a new todo
  • PUT /todos/:id: update an existing todo
  • DELETE /todos/:id: delete an existing todo

we can already create a rough outline of the methods we need and their corresponding Angular HTTP methods:

import { Injectable } from '@angular/core';
import { environment } from 'environments/environment';

import { Http, Response } from '@angular/http';
import { Todo } from './todo';
import { Observable } from 'rxjs/Observable';

const API_URL = environment.apiUrl;

@Injectable()
export class ApiService {

  constructor(
    private http: Http
  ) {
  }

  // API: GET /todos
  public getAllTodos() {
    // will use this.http.get()
  }

  // API: POST /todos
  public createTodo(todo: Todo) {
    // will use this.http.post()
  }

  // API: GET /todos/:id
  public getTodoById(todoId: number) {
    // will use this.http.get()
  }

  // API: PUT /todos/:id
  public updateTodo(todo: Todo) {
    // will use this.http.put()
  }

  // API: DELETE /todos/:id
  public deleteTodoById(todoId: number) {
    // will use this.http.delete()
  }
}

Let's have a closer look at each of the methods.
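To give an idea of where this is heading, the first of these methods could be fleshed out roughly as follows (a sketch only: it assumes the Todo class from part two accepts a plain object in its constructor, and it requires import 'rxjs/add/operator/map' at the top of the file):

// Sketch of ApiService.getAllTodos(), not necessarily the final implementation
public getAllTodos(): Observable<Todo[]> {
  return this.http
    .get(API_URL + '/todos')
    .map(response => {
      const todos = response.json();
      // wrap each plain JSON object in a Todo instance
      return todos.map((todo: any) => new Todo(todo));
    });
}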

Continue reading Angular and RxJS: Create an API Service to Talk to a REST Backend on SitePoint.


by Jurgen Van de Moere via SitePoint

Technical SEO vs Content Marketing: Which Matters More?

Technical SEO vs Content Marketing: Which Matters More?

At WooRank, we like to define SEO as "the strategies, tactics and techniques used to rank highly in search engine results for the keywords used by your target audience in order to increase your reach and conversions." However, marketers and website owners have to decide which tactics and techniques to focus on, especially those with smaller teams or those doing it all themselves.

Often, this decision comes down to choosing technical SEO vs. content.

But which one is more important? Where should you focus your time and effort?

What Is Technical SEO?

Technical SEO is, basically, the way your website is set up to help search engines read and/or interpret your page content, and to provide humans with a great user experience.

Technical SEO includes, but isn’t limited to:

  • Robots.txt: A text file that lives in your website’s root directory, robots.txt is a set of instructions that tells crawlers what they can and can’t crawl. Disallow low-value pages, duplicate pages and other content you don’t want indexed (a minimal example follows this list).
  • Meta robots tag: Similar to robots.txt, the meta robots tag uses the content attribute to tell crawlers not to index a page (NoIndex) and/or not to follow any of the page’s links (NoFollow). Note that the NoFollow command applies to the whole page; add the rel="nofollow" attribute to an anchor tag to nofollow individual links.
  • XML sitemap: Sitemaps contain a list of every page on a website, along with some important details about those pages. Search engines use them to find pages, as well as to figure out how often they should crawl a site. Any page you want to appear in SERPs should be in your sitemap.
  • Page speed: Page speed and load time are really important for user experience and SEO. Optimize your images, caching and redirects, and use gzip compression to improve load time.
  • Structured data: Structured data, like RDF, microdata or JSON-LD, helps computers interpret the context of the words used in your text. It’s how you harness the power of the semantic web for your benefit. Google relies on structured data to create its rich search results, and the better it can interpret what’s on a page, the more likely it is to serve it for a relevant query.
  • Responsive design: Websites that use responsive design via the mobile viewport are more likely to be seen as mobile friendly by search engines. Responsive design scales a website to render according to the device screen, creating a better mobile user experience. It also eliminates the need to create alternate versions of your website served based on user-agent, which would cost your development team even more time and money.
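To make the first two items concrete, here is what a minimal robots.txt and the corresponding on-page tags could look like (illustrative values only; the URLs and paths are placeholders):

# robots.txt: lives in the site root, e.g. https://www.example.com/robots.txt
User-agent: *
Disallow: /search/
Disallow: /cart/

Sitemap: https://www.example.com/sitemap.xml

<!-- Meta robots tag: keep this page out of the index and don't follow its links -->
<meta name="robots" content="noindex, nofollow">

<!-- Nofollow a single link instead of the whole page -->
<a href="https://www.example.com/untrusted" rel="nofollow">Untrusted link</a>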

What Makes Technical SEO So Important?

If you do some research, you’ll notice that other than site speed, most aspects of technical SEO aren’t ranking factors on their own. All things being equal, a site with a robots.txt file isn’t necessarily going to outrank a site without one.

So why should you spend all this time working on something that’s not going to give you a boost in SERPs?

Because technical SEO can have a huge indirect impact on your rankings, and your ability to even get indexed in the first place.

Think about it: without a robots.txt file or sitemap, Googlebot could waste all of its crawl budget trying to access a folder full of images or videos. Or, if you don’t use canonical URLs, people linking to alternate versions of a page will dilute your website’s link juice. No structured data? Your Knowledge Panel isn’t going to look too robust, either.

So, without technical SEO, your website isn’t going to go very far with Google.

[Image: a website with only one page indexed by Google]

Poor Pijonares.

What Is Content Marketing?

First, a definition:

Content marketing is a strategic marketing approach focused on creating and distributing valuable, relevant, and consistent content to attract and retain a clearly-defined audience — and, ultimately, to drive profitable customer action.

- Content Marketing Institute

Second, a truism:

Content is king.

You’ve probably heard that before - it’s been around since Bill Gates published it in 1996.

What makes content king?

Content is the whole reason people go to your website from search results. They want to read, watch or listen to whatever is on the landing page (and, hopefully, take an action based on that). Inbound marketing like SEO relies on quality content to attract leads and customers.

The Value of Content Marketing

The value of content marketing is twofold: SEO and conversion rate optimization.

Content has several SEO benefits:

Continue reading Technical SEO vs Content Marketing: Which Matters More? on SitePoint.


by Stephen Tasker via SitePoint

Getting Started with Sulu CMS on Vagrant The Right Way™

In this tutorial, we'll learn how to get started with Sulu CMS the right way - meaning, we'll deploy a Sulu "Hello World" instance using Homestead Improved and be mindful of performance issues and configuration values while we're at it. We'll also cover some common pitfalls, all in an attempt to get a good base set up for future Sulu tutorials. It is recommended you follow along with the instructions in this post and drop a comment with any problems you might run into.

Many thanks to Daniel Rotter and Patrik Karisch for helping me iron this process out!

Note that it's highly recommended to be familiar with Homestead Improved before starting out. If you're not at that level yet, you should buy our amazing book about PHP Environment Basics.


[Sidenote] Enter your project's name

This tutorial is dynamic in that it will replace all placeholders in the text below with the project name you define in the field under this paragraph. That way, the commands become very copy/paste friendly.




OS X Vagrant Folder Sharing Problems

When using the NFS folder-sharing mode on OS X hosts, the vagrant-bindfs plugin will be necessary. Install it alongside your Vagrant installation with vagrant plugin install vagrant-bindfs. This is a one-time thing that'll prevent many, many headaches down the line if OS X is your main OS.

The rest is all automatic and already configured in the Homestead Improved instance; you don't need to do anything else.

Vagrant up

The first thing we do is, of course, clone the HI repo.

git clone http://ift.tt/1Lhem4x 
cd 

Next, let's configure the shared folders:

bin/folderfix.sh

This shares the current working directory with the /Code directory inside the VM. That way, the changes made in this folder will be reflected inside the virtual machine and vice versa.

Like any Symfony app, Sulu requires a custom virtualhost configuration for Nginx. We've made things easier by turning it into a "project type" in Homestead Improved, so all you need to do is make the following modifications to Homestead.yaml:

  • add the nfs folder sharing type (on OS X and Windows 10)
  • add the sulu project type and change its document root subfolder to web

The relevant sections should end up looking like this:

...

folders:
    - map: /Users/swader/vagrant_boxes/homestead/
      to: /home/vagrant/Code
      type: nfs

sites:
    - map: .app
      to: /home/vagrant/Code//web
      type: sulu

Finally, let's fire up the VM.

vagrant up; vagrant ssh

Protip: Useful aliases to set up for future use:

alias vh='vagrant halt; cd ..'
alias vush='vagrant up; vagrant ssh'

Setting up Sulu

Creating the Project

Let's install Sulu's standard edition (the minimal edition is actually "standard" now, whereas the old "standard" is deprecated - they're working on renaming this).

cd Code
composer create-project sulu/sulu-minimal 

Note that the docs currently suggest adding a -n flag at the end of that Composer command which means "No interactive questions". I like it when an installer asks me about things I'm supposed to configure anyway, so I omitted it.

Continue reading Getting Started with Sulu CMS on Vagrant The Right Way™ on SitePoint.


by Bruno Skvorc via SitePoint