Chip Cullen shares a great list of tips and resources to help you do a manual performance audit that gives you a true understanding of what is happening under the hood with your site. (alistapart.com)
.design domains were just released and some of the best ones are still available. They normally cost $35, but Web Design Weekly subscribers can get them now for only $5. (we.design)
Jake Archibald digs into some of the finer details surrounding HTTP/2 push from the performance aspect and has produced a really thorough and easy to understand article. (jakearchibald.com)
Nick Babich discusses some of the benefits, things to consider and quick tips for long scrolling designs. (smashingmagazine.com)
A great article that gives sound advice on which state management solution (Redux or MobX) is best for your project. (sitepoint.com)
With so many people adopting the floating label pattern within forms, this is a pretty good read that highlights some of its flaws. (medium.com)
If you are having a bad day fighting with CSS, it might be worth taking five minutes to read this post by TJ VanToll, which gives CSS a bit of context compared to other styling workflows. (developer.telerik.com)
Chris Coyier gives us the lowdown on Browserslist and why it’s something we should be looking into. (css-tricks.com)
Our 50+ page eBook covers everything you need to know to start your WordPress maintenance business. (godaddy.com)
A new tutorial series from the creator of the most popular Sketch App tutorials on YouTube. Brand new for 2017. (youtube.com)
The team at Lever share how they went from not having any visibility into their app performance to having a fine-tuned monitoring setup. (fulcrum.lever.co)
You should be someone who can combine design thinking with execution, to produce industry-leading work and inspire excellence. You should see yourself as a generalist, with strengths and weaknesses across the user experience design spectrum. You should know how to work with metrics. (angel.co)
WegoWise is looking for a Designer to create beautiful designs that visually communicate WegoWise’s value to target audiences. This role works closely with our Head of Design and Marketing team and on a variety of platforms, including our web application, landing pages, flyers, email campaigns, etc. (wegowise.com)
The post Web Design Weekly #281 appeared first on Web Design Weekly.
While microservices are all the hype, notable experts warn against starting out that way. Instead, you might want to build a modular monolith first: a safe bet if you are considering going into microservices later but do not yet have an immediate need. This article shows you how to build a lean monolith with OSGi, modular enough to be split into microservices without too much effort when that style becomes the appropriate solution for the application's scaling requirements.
A very good strategy for creating a well-modularized solution is to implement domain-driven design (Eric Evans). It already focuses on business capabilities and has the notion of bounded contexts that provide the necessary modularization. In this article we will use OSGi to implement the services, as it provides good support for modules (bundles) and lightweight communication between them (OSGi services). As we will see, this also provides a nice path to microservices later.
This article does not require prior knowledge of OSGi. I will explain relevant aspects as we go along, and if you come away from this article with the understanding that OSGi can be used to build a decoupled monolith in preparation for a possible move towards microservices, it will have achieved its goal. You can find the sources for the example application on GitHub.
To keep the business complexity low, we will use a rather simple example - a chat application. We want the application to be able to send and receive broadcast messages and implement this in three very different channels.
Each of these channels uses the same interfaces to send and receive messages. It should be possible to plug the channels in and out and to automatically connect them to each other. In OSGi terms each channel will be a bundle and use OSGi services to communicate with the other channels.
Don't worry if you do not have Tinkerforge hardware. Obviously the Tinkerforge module will then not work but it will not affect the other channels.
The example project will be built using Maven and most of the general setup is done in the parent pom.
OSGi bundles are just JAR files with an enhanced manifest that contains the OSGi specific entries. A bundle has to declare which packages it imports from other bundles and which packages it exports. Fortunately most of this happens automatically by using the bnd-maven-plugin. It analyzes the Java sources and auto-creates suitable imports. The exports and other special settings are defined in a special file, bnd.bnd. In most cases this file can be empty or even left out.
The two plugins below make sure each Maven module creates a valid OSGi bundle. The individual modules do not need special OSGi settings in the pom - for them it suffices to reference the parent pom that is being built here. The maven-jar-plugin defines that we want to use the MANIFEST.MF file from bnd instead of the default Maven-generated one.
<build>
  <plugins>
    <plugin>
      <groupId>biz.aQute.bnd</groupId>
      <artifactId>bnd-maven-plugin</artifactId>
      <version>3.3.0</version>
      <executions>
        <execution>
          <goals>
            <goal>bnd-process</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-jar-plugin</artifactId>
      <version>2.5</version>
      <configuration>
        <archive>
          <manifestFile>${project.build.outputDirectory}/META-INF/MANIFEST.MF</manifestFile>
        </archive>
      </configuration>
    </plugin>
    <!-- ... more plugins ... -->
  </plugins>
</build>
Each of the modules we are designing below creates an OSGi bundle. The poms of the individual modules are very simple, as most of the setup is already done in the parent, so we omit them here. Please take a look at the sources of the OSGi chat project to see the details.
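For orientation, such a module pom boils down to little more than a parent reference and an artifactId. The coordinates below are made up for illustration; the real ones are in the example repository.

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <!-- All bnd/OSGi build setup is inherited from the parent pom shown above -->
  <parent>
    <groupId>net.lr.demo.chat</groupId>      <!-- hypothetical groupId -->
    <artifactId>chat-parent</artifactId>     <!-- hypothetical parent artifactId -->
    <version>1.0.0-SNAPSHOT</version>
  </parent>
  <!-- hypothetical module name -->
  <artifactId>chat-service</artifactId>
</project>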
The example uses Declarative Services (DS) as a dependency injection and service framework. This is a very lightweight system defined by the OSGi specs that allows you to publish and use services as well as consume configuration. DS is very well suited to OSGi as it supports the full dynamics of OSGi, where bundles and services can come and go at any time. A component in DS can offer an OSGi service and depend on other OSGi services and configuration. Each component has its own dynamic lifecycle and will only activate when all mandatory dependencies are present. It will also dynamically adapt to changes in services and configuration, so changes are applied almost instantly.
As DS takes care of the dependencies, the developer can concentrate on the business domain and does not have to code the dynamics of OSGi. As a first example of a DS component, see the ChatBroker service below. At runtime DS uses XML files to describe components; the bnd-maven-plugin automatically processes the DS annotations and transparently creates these XML files during the build.
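To make this concrete, the descriptor generated for the ChatBroker component shown further down would look roughly like the sketch below. The package names are assumptions made for illustration, and the exact attributes depend on the DS spec version that bnd targets.

<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.3.0"
    name="net.lr.demo.chat.broker.ChatBroker" immediate="true">
  <implementation class="net.lr.demo.chat.broker.ChatBroker"/>
  <service>
    <provide interface="net.lr.demo.chat.broker.ChatBroker"/>
  </service>
  <!-- the @Reference field becomes a dynamic (0..n) reference -->
  <reference name="listeners" interface="net.lr.demo.chat.service.ChatListener"
      cardinality="0..n" policy="dynamic" field="listeners"/>
</scr:component>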
In our simple chat domain we just need one service interface, ChatListener, to receive or send chat messages. A ChatListener listens to messages; modules that want to receive messages publish an implementation of ChatListener as an OSGi service to signal that they want to listen. This is called the whiteboard pattern and is widely used.
public interface ChatListener {
    void onMessage(ChatMessage message);
}
ChatMessage is a value object to hold all information about a chat message.
public class ChatMessage implements Serializable {
    private static final long serialVersionUID = 4385853956172948160L;

    private Date time;
    private String sender;
    private String message;
    private String senderId;

    public ChatMessage(String senderId, String sender, String message) {
        this.senderId = senderId;
        this.time = new Date();
        this.sender = sender;
        this.message = message;
    }

    // .. getters ..
}
In addition we use a ChatBroker component, which allows sending a message to all currently available listeners. This is more of a convenience service, as each channel could simply implement this functionality on its own.
@Component(service = ChatBroker.class, immediate = true)
public class ChatBroker {
    private static Logger LOG = LoggerFactory.getLogger(ChatBroker.class);

    @Reference
    volatile List<ChatListener> listeners;

    public void onMessage(ChatMessage message) {
        listeners.parallelStream().forEach((listener) -> send(message, listener));
    }

    private static void send(ChatMessage message, ChatListener listener) {
        try {
            listener.onMessage(message);
        } catch (Exception e) {
            LOG.warn(e.getMessage(), e);
        }
    }
}
ChatBroker is defined as a declarative service component using the DS annotations. It will offer a ChatBroker OSGi service and will activate immediately when all dependencies are present (by default DS components are only activated if their service is requested by another component).
The @Reference annotation defines a dependency on one or more OSGi services. In this case, volatile List<ChatListener> marks the dependency as (0..n). The list is automatically populated with a thread-safe representation of the currently available ChatListener services. The onMessage method uses a Java 8 parallel stream to call send for all listeners in parallel.
In this module we need a bnd.bnd file to declare that we want to export the API package. In fact this is the only tuning of the bundle creation we do in this whole example project.
Export-Package: net.lr.demo.chat.service
The shell channel allows us to send and receive chat messages using the Felix Gogo shell, a command-line interface (much like bash) that makes for easy communication with OSGi. See also the appnote at enRoute for the Gogo shell.
The SendCommand class implements a Gogo command that sends a message to all listeners when the command send <msg> is typed in the shell. It announces itself as an OSGi service with special service properties: the scope and function properties define that the service implements a command and how the command is addressed. The full syntax for our command is chat:send <msg>, but it can be abbreviated to send <msg> as long as send is unique.
When Felix Gogo recognizes a command on the shell, it will call a method with the name of the command and pass the parameter(s) as method arguments. In the case of SendCommand, the message parameter is used to create a ChatMessage, which is then sent to the ChatBroker service.
@Component(service = SendCommand.class,
    property = {"osgi.command.scope=chat", "osgi.command.function=send"}
)
public class SendCommand {
    @Reference
    ChatBroker broker;

    private String id;

    @Activate
    public void activate(BundleContext context) {
        this.id = "shell" + context.getProperty(Constants.FRAMEWORK_UUID);
    }

    public void send(String message) {
        broker.onMessage(new ChatMessage(id, "shell", message));
    }
}
The ShellListener class receives a ChatMessage and prints it to the shell. It implements the ChatListener interface and publishes itself as a service, so it becomes visible to ChatBroker and is added to its list of chat listeners. When a message comes in, the onMessage method is called and simply prints to System.out, which in Gogo represents the shell.
@Component
public class ShellListener implements ChatListener {
    public void onMessage(ChatMessage message) {
        System.out.println(String.format("%tT %s: %s",
            message.getTime(), message.getSender(), message.getMessage()));
    }
}
Continue reading Building a Lean Modular Monolith with OSGi.
This article is part 3 of the SitePoint Angular 2+ Tutorial on how to create a CRUD App with the Angular CLI.
In part one we learned how to get our Todo application up and running and deploy it to GitHub pages. This worked just fine but, unfortunately, the whole app was crammed into a single component.
In part two we examined a more modular component architecture and learned how to break this single component into a structured tree of smaller components that are easier to understand, re-use and maintain.
In this part, we will update our application to communicate with a REST API back-end.
You don't need to have followed part one or two of this tutorial for part three to make sense. You can simply grab a copy of our repo, check out the code from part two, and use that as a starting point. This is explained in more detail below.
Here is what our application architecture looked like at the end of part 2:
Currently, the TodoDataService stores all data in memory. In this third article, we will update our application to communicate with a REST API back-end instead.
We will:

- create an ApiService to communicate with the REST API
- update the TodoDataService to use the new ApiService
- update the AppComponent to handle asynchronous API calls
- create an ApiMockService to avoid real HTTP calls when running unit tests

By the end of this article, you'll understand how all of these pieces fit together.
So, let's get started!
Make sure you have the latest version of the Angular CLI installed. If you don't, you can install this with the following command:
npm install -g @angular/cli@latest
If you need to remove a previous version of the Angular CLI, you can:
npm uninstall -g @angular/cli angular-cli
npm cache clean
npm install -g @angular/cli@latest
After that, you'll need a copy of the code from part two. This is available at http://ift.tt/2mpeXuK. Each article in this series has a corresponding tag in the repository so you can switch back and forth between the different states of the application.
The code that we ended with in part two and that we start with in this article is tagged as part-2. The code that we end this article with is tagged as part-3.
You can think of tags like an alias to a specific commit ID. You can switch between them using git checkout. You can read more on that here.
So, to get up and running (assuming you have the latest version of the Angular CLI installed), we would do:
git clone git@github.com:sitepoint-editors/angular-todo-app.git
cd angular-todo-app
git checkout part-2
npm install
ng serve
Then visit http://localhost:4200/. If all is well, you should see the working Todo app.
Let's use json-server to quickly set up a mock back-end.
From the root of the application, run:
npm install json-server --save
Next, in the root directory of our application, create a file called db.json with the following contents:
{
  "todos": [
    {
      "id": 1,
      "title": "Read SitePoint article",
      "complete": false
    },
    {
      "id": 2,
      "title": "Clean inbox",
      "complete": false
    },
    {
      "id": 3,
      "title": "Make restaurant reservation",
      "complete": false
    }
  ]
}
Finally, add a script to package.json to start our back-end:
"scripts": {
...
"json-server": "json-server --watch db.json"
}
We can now launch our REST API using:
npm run json-server
which should display:
\{^_^}/ hi!
Loading db.json
Done
Resources
http://localhost:3000/todos
Home
http://localhost:3000
That's it! We now have a REST API listening on port 3000.
To verify that your back-end is running as expected, you can navigate your browser to http://localhost:3000.
The following endpoints are supported:
- GET /todos: get all existing todos
- GET /todos/:id: get an existing todo
- POST /todos: create a new todo
- PUT /todos/:id: update an existing todo
- DELETE /todos/:id: delete an existing todo

So if you navigate your browser to http://localhost:3000/todos, you should see a JSON response with all todos from db.json.
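You can also hit an endpoint straight from the command line. Assuming curl is installed, this should return the same list of todos:

curl http://localhost:3000/todos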
To learn more about json-server, make sure to check out mock REST APIs using json-server.
Now that we have our back-end in place, we must store its URL in our Angular application.
Ideally, we should be able to configure the API URL in one place, and use a different API URL for development and production without having to change our application code.
Luckily, Angular CLI supports environments. By default, there are two environments: development and production, both with a corresponding environment file: src/environments/environment.ts and src/environments/environment.prod.ts.
Let's add our API URL to both files:
// src/environments/environment.ts
// used when we run `ng serve` or `ng build`
export const environment = {
  production: false,

  // URL of development API
  apiUrl: 'http://localhost:3000'
};

// src/environments/environment.prod.ts
// used when we run `ng serve --environment prod` or `ng build --environment prod`
export const environment = {
  production: true,

  // URL of production API
  apiUrl: 'http://localhost:3000'
};
This will later allow us to get the API URL from our environment in our Angular application by doing:
import { environment } from 'environments/environment';
// we can now access environment.apiUrl
const API_URL = environment.apiUrl;
When we run ng serve or ng build, Angular CLI uses the value specified in the development environment (src/environments/environment.ts).
But when we run ng serve --environment prod or ng build --environment prod, Angular CLI uses the value specified in src/environments/environment.prod.ts.
This is exactly what we need to use a different API URL for development and production, without having to change our code.
The application in this article series is not hosted in production, so we specify the same API URL in our development and production environment. This allows us to run ng serve --environment prod or ng build --environment prod locally to see if everything works as expected.
You can find the mapping between dev and prod and their corresponding environment files in .angular-cli.json:
"environments": {
"dev": "environments/environment.ts",
"prod": "environments/environment.prod.ts"
}
You can also create additional environments, such as staging, by adding a key:
"environments": {
"dev": "environments/environment.ts",
"staging": "environments/environment.staging.ts",
"prod": "environments/environment.prod.ts"
}
and creating the corresponding environment file.
To learn more about Angular CLI environments, make sure to check out The Ultimate Angular CLI Reference Guide.
Now that we have our API URL stored in our environment, we can create an Angular service to communicate with the REST API.
Let's use Angular CLI to create an ApiService to communicate with our REST API:
ng generate service Api --module app.module.ts
which gives the following output:
installing service
create src/app/api.service.spec.ts
create src/app/api.service.ts
update src/app/app.module.ts
The --module app.module.ts option tells Angular CLI to not only create the service but also register it as a provider in the Angular module defined in app.module.ts.
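If you open src/app/app.module.ts, the providers array should now contain the new service, roughly like the excerpt below (the rest of the module depends on what the CLI generated in the earlier parts):

// src/app/app.module.ts (excerpt)
import { ApiService } from './api.service';

@NgModule({
  // ... declarations, imports, bootstrap ...
  providers: [
    ApiService
  ]
})
export class AppModule { }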
Let's open src/app/api.service.ts:
import { Injectable } from '@angular/core';

@Injectable()
export class ApiService {

  constructor() { }

}
and inject our environment and Angular's built-in HTTP service:
import { Injectable } from '@angular/core';
import { environment } from 'environments/environment';
import { Http } from '@angular/http';

const API_URL = environment.apiUrl;

@Injectable()
export class ApiService {

  constructor(
    private http: Http
  ) {
  }

}
Before we implement the methods we need, let's have a look at Angular's HTTP service.
If you're unfamiliar with the syntax, why not check out our Premium course, Introducing TypeScript?
The Angular HTTP service is available as an injectable class from @angular/http.
It is built on top of XHR/JSONP and provides us with an HTTP client that we can use to make HTTP requests from within our Angular application.
The following methods are available to perform HTTP requests:
- delete(url, options): perform a DELETE request
- get(url, options): perform a GET request
- head(url, options): perform a HEAD request
- options(url, options): perform an OPTIONS request
- patch(url, body, options): perform a PATCH request
- post(url, body, options): perform a POST request
- put(url, body, options): perform a PUT request

Each of these methods returns an RxJS Observable.
In contrast to the AngularJS 1.x HTTP service methods, which returned promises, the Angular HTTP service methods return Observables.
Don't worry if you are not yet familiar with RxJS Observables. We only need the basics to get our application up and running. You can gradually learn more about the available operators when your application requires them and the ReactiveX website offers fantastic documentation.
If you want to learn more about Observables, it may also be worth checking out SitePoint's Introduction to Functional Reactive Programming with RxJS.
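As a minimal sketch of how these pieces fit together, here is a hypothetical helper that could live inside the ApiService we created above. It assumes the RxJS map operator has been imported with import 'rxjs/add/operator/map', and it only exists to illustrate the request/Observable flow:

// Hypothetical smoke test - not part of the final ApiService
public logAllTodos(): void {
  this.http
    .get(API_URL + '/todos')           // returns an Observable<Response>
    .map(response => response.json())  // extract the JSON body
    .subscribe(
      todos => console.log(todos),     // called when the response arrives
      error => console.error(error)    // called if the request fails
    );
}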
If we think back to the endpoints our REST API back-end exposes:

- GET /todos: get all existing todos
- GET /todos/:id: get an existing todo
- POST /todos: create a new todo
- PUT /todos/:id: update an existing todo
- DELETE /todos/:id: delete an existing todo
we can already create a rough outline of methods we need and their corresponding Angular HTTP methods:
import { Injectable } from '@angular/core';
import { environment } from 'environments/environment';
import { Http, Response } from '@angular/http';
import { Todo } from './todo';
import { Observable } from 'rxjs/Observable';

const API_URL = environment.apiUrl;

@Injectable()
export class ApiService {

  constructor(
    private http: Http
  ) {
  }

  // API: GET /todos
  public getAllTodos() {
    // will use this.http.get()
  }

  // API: POST /todos
  public createTodo(todo: Todo) {
    // will use this.http.post()
  }

  // API: GET /todos/:id
  public getTodoById(todoId: number) {
    // will use this.http.get()
  }

  // API: PUT /todos/:id
  public updateTodo(todo: Todo) {
    // will use this.http.put()
  }

  // DELETE /todos/:id
  public deleteTodoById(todoId: number) {
    // will use this.http.delete()
  }

}
Let's have a closer look at each of the methods.
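As a preview of where this is heading, a sketch of getAllTodos could look roughly like the following. It assumes the Todo class from the earlier parts accepts an object of values in its constructor and that the RxJS map operator has been imported; the actual implementation covered in the rest of the article may differ in details such as error handling.

// Sketch only - assumes import 'rxjs/add/operator/map' at the top of the file
public getAllTodos(): Observable<Todo[]> {
  return this.http
    .get(API_URL + '/todos')                            // GET /todos
    .map(response => {
      const todos = response.json();                    // parse the JSON body
      return todos.map((todo: any) => new Todo(todo));  // map raw objects to Todo instances
    });
}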
Continue reading Angular and RxJS: Create an API Service to Talk to a REST Backend.
At WooRank, we like to define SEO as "the strategies, tactics and techniques used to rank highly in search engine results for the keywords used by your target audience in order to increase your reach and conversions." However, marketers and website owners have to decide which tactics and techniques to focus on, especially those with smaller teams or those doing it all themselves.
Often, this decision comes down to choosing technical SEO vs. content.
But which one is more important? Where should you focus your time and effort?
Technical SEO is, basically, the way your website is set up to help search engines read and/or interpret your page content, and to provide humans with a great user experience.
Technical SEO includes, but isn’t limited to, things like site speed, robots.txt, sitemaps, canonical URLs and structured data.
If you do some research, you’ll notice that other than site speed, most aspects of technical SEO aren’t ranking factors on their own. All things being equal, a site with a robots.txt file isn’t necessarily going to outrank a site without one.
So why should you spend all this time working on something that’s not going to give you a boost in SERPs?
Because technical SEO can have a huge indirect impact on your rankings, and your ability to even get indexed in the first place.
Think about it: without a robots.txt file or sitemap, Googlebot could waste all of its crawl budget trying to access a folder full of images or videos. Or, if you don’t use canonical URLs, people linking to alternate versions of a page will dilute your website’s link juice. No structured data? Your Knowledge Panel isn’t going to look too robust, either.
So, without technical SEO, your website isn’t going to go very far with Google.
First, a definition:
Content marketing is a strategic marketing approach focused on creating and distributing valuable, relevant, and consistent content to attract and retain a clearly-defined audience — and, ultimately, to drive profitable customer action.
- Content Marketing Institute
Second, a truism:
Content is king.
You’ve probably heard that before - it’s been around since Bill Gates published it in 1996.
What makes content king?
Content is the whole reason people go to your website from search results. They want to read, watch or listen to whatever is on the landing page (and, hopefully, take an action based on that). Inbound marketing like SEO relies on quality content to attract leads and customers.
The value of content marketing is twofold: SEO and conversion rate optimization.
Content has several SEO benefits:
Continue reading Technical SEO vs Content Marketing: Which Matters More?
In this tutorial, we'll learn how to get started with Sulu CMS the right way - meaning, we'll deploy a Sulu "Hello World" instance using Homestead Improved and be mindful of performance issues and configuration values while we're at it. We'll also cover some common pitfalls, all in an attempt to get a good base set up for future Sulu tutorials. It is recommended you follow along with the instructions in this post and drop a comment with any problems you might run into.
Many thanks to Daniel Rotter and Patrik Karisch for helping me iron this process out!
Note that it's highly recommended to be familiar with Homestead Improved before starting out. If you're not at that level yet, you should buy our amazing book about PHP Environment Basics.
Throughout this tutorial we'll use myproject as a placeholder project name; substitute your own project name wherever it appears in the commands and configuration below.
When using the NFS folder-sharing mode on OS X hosts, the vagrant-bindfs plugin will be necessary. Install it alongside your Vagrant installation with vagrant plugin install vagrant-bindfs. This is a one-time thing that'll prevent many, many headaches down the line if OS X is your main OS.
. This is a one-time thing that'll prevent many, many headaches down the line if OS X is your main OS.
The rest is all automatic and already configured in the Homestead Improved instance; you don't need to do anything else.
The first thing we do is, of course, clone the HI repo.
git clone http://ift.tt/1Lhem4x myproject
cd myproject
Next, let's configure the shared folders:
bin/folderfix.sh
This made the current working directory shared with the Code directory inside the VM. That way, the changes made in this folder will be reflected inside the virtual machine and vice versa.
Like with any Symfony app, Sulu requires a custom virtual host configuration for Nginx. We've made things easier by turning it into a "project type" in Homestead Improved, so all you need to do is make the following modifications to Homestead.yaml:
- use the nfs folder sharing type (on OS X and Windows 10)
- use the sulu project type and change its document root subfolder to web
The relevant sections should end up looking like this:
...
folders:
  - map: /Users/swader/vagrant_boxes/homestead/
    to: /home/vagrant/Code
    type: nfs

sites:
  - map: myproject.app
    to: /home/vagrant/Code/myproject/web
    type: sulu
Finally, let's fire up the VM.
vagrant up; vagrant ssh
Protip: Useful aliases to set up for future use:
alias vh='vagrant halt; cd ..'
alias vush='vagrant up; vagrant ssh'
Let's install Sulu's standard edition (the minimal edition is actually "standard" now, whereas the old "standard" is deprecated - they're working on renaming this).
cd Code
composer create-project sulu/sulu-minimal myproject
Note that the docs currently suggest adding a -n flag at the end of that Composer command, which means "no interactive questions". I like it when an installer asks me about things I'm supposed to configure anyway, so I omitted it.
Continue reading Getting Started with Sulu CMS on Vagrant The Right Way™.