"Mr Branding" is a blog based on RSS for everything related to website branding and website design, it collects its posts from many sites in order to facilitate the updating to the latest technology.
To suggest any source, please contact me: Taha.baba@consultant.com
Wednesday, June 22, 2016
Jack & Rose
by Rob Hope via One Page Love
Li-Fi: Lighting the Future of Wireless Networks
Light Fidelity (Li-Fi) is a relatively new form of wireless communication technology that uses light signals to transmit data. The excitement surrounding Li-Fi comes from its proven capacity for higher speeds than Wi-Fi: in the lab, Li-Fi has reached speeds of 224 gigabits per second, and a field test of the technology in a factory in Estonia achieved transmission rates of 1 gigabit per second.
[caption id="attachment_133339" align="aligncenter" width="674"] Source: Boston University and Science Alert[/caption]
Li-Fi was introduced to the world by Professor Harald Haas in a 2011 TED Talk. He wanted to turn the world's light bulbs into wireless routers. Soon after the TED Talk, in 2012, he launched Pure Li-Fi, a company that develops Li-Fi devices and leads Li-Fi product development. The Li-Fi Consortium was also formed with the aim of sharing information and developing the technology. The consortium is an open non-profit organization: any organization can license its technology or partner with it, and there are no membership fees to join.
"All we need to do is fit a small microchip to every potential illumination device and this would then combine two basic functionalities, illumination, and wireless data transmission. In the future we will not only have 14 billion light bulbs, we may have 14 billion Li-Fis deployed worldwide for a cleaner, greener, and even brighter future." Harald Hass Ted Talk 2011.
Harald Haas has proved that data can be transmitted over the light spectrum, which makes Li-Fi a form of optical wireless communication. Li-Fi transmits data using visible light, and the technology can also use the infrared and ultraviolet parts of the spectrum. These bands can carry more information than radio frequency waves, which is why Li-Fi can achieve greater speeds than Wi-Fi.
Continue reading Li-Fi: Lighting the Future of Wireless Networks on SitePoint.
by Brian Sebele via SitePoint
Python Virtual Environments
Overview
Many of us work on multiple Python projects at the same time. Multiple projects may depend on different versions of the same library. This is a problem. Even if you work on a single project and you deploy it to production, you may run into trouble, because the system Python on your production server might change due to an OS upgrade or a security patch, and your application might fail as a result. In general, you want full control over the Python environment of your projects. Enter virtual environments...
The basic idea of a virtual environment is to have a Python interpreter and its site-packages separate from the system one. Also, you can have many of them. That solves both problems. You can assign a separate virtual environment with its own dependencies for each project and forget about conflicts with other projects and the system's Python.
In this tutorial, you'll learn the concepts behind virtual environments and how to create and use them, and you'll discover various alternatives for special situations.
Virtualenv
The virtualenv package supports this concept. You can install it with pip install virtualenv.
Once virtualenv is installed, you can start creating virtual environments. Let's create two environments called "venv_1" and "venv_2".
~ > virtualenv ~/venv_1
Using real prefix '/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7'
New python executable in /Users/gigi/venv_1/bin/python2.7
Also creating executable in /Users/gigi/venv_1/bin/python
Installing setuptools, pip, wheel...done.
~ > virtualenv ~/venv_2
Using real prefix '/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7'
New python executable in /Users/gigi/venv_2/bin/python2.7
Also creating executable in /Users/gigi/venv_2/bin/python
Installing setuptools, pip, wheel...done.
Let's see what happened.
~ > ls -la ~/venv_1
total 16
drwxr-xr-x    7 gigi  staff   238 Mar 29 23:12 .
drwxr-xr-x+ 254 gigi  staff  8636 Mar 29 23:12 ..
lrwxr-xr-x    1 gigi  staff    79 Mar 29 23:12 .Python -> /usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/Python
drwxr-xr-x   16 gigi  staff   544 Mar 29 23:12 bin
drwxr-xr-x    3 gigi  staff   102 Mar 29 23:12 include
drwxr-xr-x    3 gigi  staff   102 Mar 29 23:12 lib
-rw-r--r--    1 gigi  staff    60 Mar 29 23:12 pip-selfcheck.json
Inside the "bin" sub-directory, you'll find a few executables and symlinks. Those include the Python interpreter itself, pip, easy_install, and most importantly a few activate scripts.
~ > ls -la ~/venv_1/bin
total 136
drwxr-xr-x  16 gigi  staff    544 Mar 29 23:12 .
drwxr-xr-x   7 gigi  staff    238 Mar 29 23:12 ..
-rw-r--r--   1 gigi  staff   2077 Mar 29 23:12 activate
-rw-r--r--   1 gigi  staff   1019 Mar 29 23:12 activate.csh
-rw-r--r--   1 gigi  staff   2217 Mar 29 23:12 activate.fish
-rw-r--r--   1 gigi  staff   1137 Mar 29 23:12 activate_this.py
-rwxr-xr-x   1 gigi  staff    249 Mar 29 23:12 easy_install
-rwxr-xr-x   1 gigi  staff    249 Mar 29 23:12 easy_install-2.7
-rwxr-xr-x   1 gigi  staff    221 Mar 29 23:12 pip
-rwxr-xr-x   1 gigi  staff    221 Mar 29 23:12 pip2
-rwxr-xr-x   1 gigi  staff    221 Mar 29 23:12 pip2.7
lrwxr-xr-x   1 gigi  staff      9 Mar 29 23:12 python -> python2.7
-rwxr-xr-x   1 gigi  staff   2336 Mar 29 23:12 python-config
lrwxr-xr-x   1 gigi  staff      9 Mar 29 23:12 python2 -> python2.7
-rwxr-xr-x   1 gigi  staff  12744 Mar 29 23:12 python2.7
-rwxr-xr-x   1 gigi  staff    228 Mar 29 23:12 wheel
The activate script is the key. To activate a specific virtual environment, you source its activate script, as in: source ~/venv_1/bin/activate. This sets a number of environment variables and changes the prompt to the name of the activated virtual environment. It also adds a deactivate shell function that resets everything. Once a virtual environment is activated, typing python launches its Python with its own dependencies.
~ > source ~/venv_1/bin/activate
(venv_1) ~ > which python
/Users/gigi/venv_1/bin/python
(venv_1) ~ >
Let's deactivate:
(venv_1) ~ > deactivate
~ > which python
/usr/local/bin/python
If you have multiple Python interpreters installed on your system, you can specify which one to use for your virtual environment using the -p option. Here is a Python 3 virtual environment:
~ > virtualenv ~/venv_3 -p /usr/local/bin/python3
Running virtualenv with interpreter /usr/local/bin/python3
Using base prefix '/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5'
New python executable in /Users/gigi/venv_3/bin/python3.5
Also creating executable in /Users/gigi/venv_3/bin/python
Installing setuptools, pip...done.
~ > source ~/venv_3/bin/activate
(venv_3)~ > python
Python 3.5.1 (default, Jan 9 2016, 19:28:52)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.1.76)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
Virtualenv even works with PyPy.
~ > virtualenv ~/venv_pypy -p `which pypy`
Running virtualenv with interpreter /usr/local/bin/pypy
New pypy executable in /Users/gigi/venv_pypy/bin/pypy
Installing setuptools, pip...done.
~ > source ~/venv_pypy/bin/activate
(venv_pypy)~ > python
Python 2.7.10 (5f8302b8bf9f53056e40426f10c72151564e5b19, Feb 11 2016, 20:39:39)
[PyPy 4.0.1 with GCC 4.2.1 Compatible Apple LLVM 7.0.2 (clang-700.1.81)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>>
You can switch directly from one environment to the other without deactivating first:
(venv_3)~ > source ~/venv_2/bin/activate
(venv_2) ~ > which python
/Users/gigi/venv_2/bin/python
OK. Let's see how to use two different versions of the same package in two different virtual environments. This is as simple as activating each environment and installing the desired version. The environments are totally independent, so the fact that there is a different version in another environment is a non-issue.
Let's install version 1.0 of the sh package into "venv_1".
(venv_1) ~ > pip install sh==1.0
Collecting sh==1.0.0
  Downloading sh-1.0.tar.gz
Building wheels for collected packages: sh
  Running setup.py bdist_wheel for sh ... done
  Stored in directory: /Users/gigi/Library/Caches/pip/wheels/f9/fb/a1/383f6dc2834b319a788a006d3ab7cc014ee852485f00b9e8c3
Successfully built sh
Installing collected packages: sh
Successfully installed sh-1.0
(venv_1) ~ > pip freeze | grep sh
sh==1.0
Let's switch to "venv_2" and install version 1.11.
(venv_1) ~ > source ~/venv_2/bin/activate
(venv_2) ~ > pip install sh==1.11
Collecting sh==1.11
  Downloading sh-1.11.tar.gz
Building wheels for collected packages: sh
  Running setup.py bdist_wheel for sh ... done
  Stored in directory: /Users/gigi/Library/Caches/pip/wheels/ba/4f/a5/ec77d662c3bf38f564b5ab16f1f3dbb9575922826fe810961c
Successfully built sh
Installing collected packages: sh
Successfully installed sh-1.11
(venv_2) ~ > pip freeze | grep sh
sh==1.11
Now, let's switch back to "venv_1" and verify that its version of the sh package is still 1.0.
(venv_2) ~ > source ~/venv_1/bin/activate
(venv_1) ~ > pip freeze | grep sh
sh==1.0
(venv_1) ~ >
Virtualenvwrapper
All that activating, deactivating and switching can get old after a while. If you manage a lot of virtual environments, it can get out of control. That's where virtualenvwrapper comes in. Virtualenvwrapper lets you list, create, delete and copy virtual environments. It also lets you switch environments easily.
Here are all the commands:
~ > virtualenvwrapper

virtualenvwrapper is a set of extensions to Ian Bicking's virtualenv tool. The extensions include wrappers for creating and deleting virtual environments and otherwise managing your development workflow, making it easier to work on more than one project at a time without introducing conflicts in their dependencies.

For more information please refer to the documentation: http://ift.tt/1pumjtG

Commands available:

add2virtualenv: add directory to the import path
allvirtualenv: run a command in all virtualenvs
cdproject: change directory to the active project
cdsitepackages: change to the site-packages directory
cdvirtualenv: change to the $VIRTUAL_ENV directory
cpvirtualenv: duplicate the named virtualenv to make a new one
lssitepackages: list contents of the site-packages directory
lsvirtualenv: list virtualenvs
mkproject: create a new project directory and its associated virtualenv
mktmpenv: create a temporary virtualenv
mkvirtualenv: Create a new virtualenv in $WORKON_HOME
rmvirtualenv: Remove a virtualenv
setvirtualenvproject: associate a project directory with a virtualenv
showvirtualenv: show details of a single virtualenv
toggleglobalsitepackages: turn access to global site-packages on/off
virtualenvwrapper: show this help message
wipeenv: remove all packages installed in the current virtualenv
workon: list or change working virtualenvs
I use pretty much two commands: mkvirtualenv and workon. All the virtual environments are created under ~/.virtualenvs.
Here is how to create a new virtual environment:
~ > mkvirtualenv venv
New python executable in venv/bin/python2.7
Also creating executable in venv/bin/python
Installing setuptools, pip...done.
(venv)~ >
This is similar to virtualenv, but you don't specify a directory, just a name. Your new environment is here:
(venv)~ > tree -L 2 ~/.virtualenvs/venv/
/Users/gigi/.virtualenvs/venv/
├── bin
│   ├── activate
│   ├── activate.csh
│   ├── activate.fish
│   ├── activate_this.py
│   ├── easy_install
│   ├── easy_install-2.7
│   ├── get_env_details
│   ├── pip
│   ├── pip2
│   ├── pip2.7
│   ├── postactivate
│   ├── postdeactivate
│   ├── preactivate
│   ├── predeactivate
│   ├── python -> python2.7
│   ├── python2 -> python2.7
│   └── python2.7
├── include
│   └── python2.7 -> /usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/include/python2.7
└── lib
    └── python2.7
To switch between environments, you use the workon command, which without arguments just lists all the virtual environments. I have quite a few:
(venv)~ > workon
acme_server
conman
curr-gen
nupic
over-achiever
pandas
prime_hunter
pypy
quote-service
venv
work
(venv)~ > workon conman
(conman) ~ >
Virtualenv-Burrito
Virtualenv-Burrito is a project to install both virtualenv and virtualenvwrapper in one command.
Python 3 Venv
The venv module was added in Python 3.3 and provides built-in virtual environment creation and management, much like virtualenv. Environments are created with the pyvenv script (or, equivalently, python3 -m venv); don't confuse it with the separate pyenv version-management tool. Otherwise it is pretty similar to virtualenv.
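As a quick sketch (the ~/venv_builtin path is just an illustrative name), creating and activating a built-in environment mirrors the virtualenv workflow shown earlier:
~ > python3 -m venv ~/venv_builtin
~ > source ~/venv_builtin/bin/activate
(venv_builtin) ~ > which python
/Users/gigi/venv_builtin/bin/python
(venv_builtin) ~ > deactivate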
Conda
Virtual environments are very useful for isolating dependencies between different projects. But that doesn't extend to native libraries. Many C extensions depend on particular versions of native libraries, and those can't be virtual environment specific.
Conda can address this problem. It is a package management system that handles all dependencies, not just Python dependencies. It can even be used for packaging non-Python software.
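As a sketch (the environment name and versions are arbitrary), creating and using a conda environment looks like this; note that newer conda releases use conda activate rather than source activate:
~ > conda create --name venv_conda python=3.5 numpy
~ > source activate venv_conda
(venv_conda) ~ > conda install requests
(venv_conda) ~ > source deactivate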
Alternatives to Virtual Environments
Do you have to use virtual environments? Not really. If for some reason you're not fond of the magic of virtual environments, there are other options.
Manually Vendorize
The functionality of a virtual environment is pretty simple. If you need total control, you can just do it yourself and copy exactly the subset of tools and packages you want into a target directory structure, set up some environment variables, and you're good to go.
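As a rough sketch of the idea (the vendor directory and my_app.py entry point are placeholders), pip's --target option can copy packages into a project-local directory that you then put on PYTHONPATH:
~ > pip install --target=./vendor requests sh
~ > PYTHONPATH=./vendor python my_app.py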
VM or Dockerized System Python
If you bake your application into a Docker container or a cloud image, then there will be just one project, and you may not need a virtual environment in the middle. You can just build on top of the system Python, confident that it will not change.
Tox
If all you care about is testing your code under different interpreters and environments then Tox can do all the heavy lifting for you. It will still use virtual environments under the covers, but you don't have to deal with them.
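A minimal tox.ini sketch, assuming a pytest-based test suite, that runs the tests under Python 2.7 and Python 3.5 could look like this:
[tox]
envlist = py27,py35

[testenv]
deps = pytest
commands = py.test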
Best Practices
There are some packaging and virtual environment best practices that have emerged over time for robust Python systems.
Pin Versions in Your Requirements Files
Pinning means specifying precisely the versions of your dependencies. If a new version comes out and you install your application on a new server, you'll still use the version you tested against and that works, and not the latest and greatest. There is a downside here—you'll have to explicitly upgrade versions if you want to keep up with the progress made by your dependencies—but it is usually worth it.
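For example, a pinned requirements.txt lists exact versions rather than open-ended ranges (the packages below are only illustrative):
# requirements.txt: every dependency pinned to an exact version
requests==2.9.1
sh==1.11
six==1.10.0
Installing with pip install -r requirements.txt then reproduces the same set of packages on every machine.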
Never Use the System Python
Relying on the system version is bad practice because there are other tools that rely on it, and if you start upgrading system packages, you may break them. You may also be affected by security updates that modify system packages, or in general if you want to upgrade your OS you may find that the system Python is now different.
Use a Private Package Index When Baking Images
PyPI may be down. If you need to bake a new image and can't access PyPI, you're in trouble. Devpi is a good option to avoid frustration.
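For instance, with a devpi server running (the host below is a placeholder; 3141 is devpi's default port), you can point pip at its mirror index and installs are served from the local cache even when PyPI is unreachable:
~ > pip install --index-url http://devpi.example.com:3141/root/pypi/+simple/ sh==1.11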
Conclusion
There are many options for managing multiple Python projects on the same machine without conflicts. Figure out which option is best for you and use it. It is fast and easy to create virtual environments. Don't hesitate to take advantage of this useful tool. If you have special requirements, there are many solutions too.
by Gigi Sayfan via Envato Tuts+ Code
Domain-Driven Design in Laravel 5
The process of software development is complicated. When we face problems, we usually try to tackle the complexity by turning it into more understandable and manageable pieces.
Domain-Driven Design is a software development methodology for tackling complex software projects to deliver an end-product that meets the goals of the organization. In fact, Domain-Driven Design promotes focusing the project on an evolving core model.
The book Domain-Driven Design was written by Eric Evans in 2003. It is a combination of widely accepted best practices along with Evans's own insights and experiences. If you are new to the idea of Domain-Driven Design, there is a lot to learn in this book.
It will teach you how to effectively model the real world in your application and use OOP to encapsulate the business logic of the organization. This is how Martin Fowler describes it:
In his excellent book Domain Driven Design, Eric Evans creates a classification of the different kinds of domain objects that you're likely to run into.
What Is a Domain Model?
In my opinion, a Domain Model is your perception of the context in which it applies. Let me describe it a bit more. The Domain is the world of the business you are working with and the problems it wants to solve. For example, if you want to develop an app for online food delivery, your domain is everything (problems, business rules, and so on) about online food delivery that your project needs to address.
The Model is your solution to the problems of the Domain. In fact, the Model lives in the mind of the business expert; it is not a chart, a UML diagram, or code. Those only describe the Model. The Model should focus on the knowledge around a specific problem, simplified and structured to provide a solution.
The Domain Model is therefore your structured solution to the problem. It should represent the vocabulary and key concepts of the problems of the domain.
Ubiquitous Language
Ubiquitous Language is the language that is used by business experts to describe the Domain Model. It means that the development team uses the language consistently in all communications, and also in the code. This language should be based on the Domain Model. Here's the definition of ubiquitous language by Eric Evans:
By using the model-based language pervasively and not being satisfied until it flows, we approach a model that is complete and comprehensible, made up of simple elements that combine to express complex ideas.
Let me give an example:
$product = new Entity\Product();
$product->setTitle(new Title('Mobile Phone'));
$product->setPrice(new Price('1000'));
$this->em->persist($product);
$this->em->flush();
In the code above, I create a new product, but in the application the product must be added, not created:
// add is a static method in the Product class
$product = Product::add(
    new Title('Mobile Phone'),
    new Price('1000')
);
In a development team, if one person creates a product and someone else adds a product, that violates the ubiquitous language. In this case, if the product's add method performs extra actions, such as sending an email, those actions will be missed by code that merely creates the product, and the team's definition of adding a product will drift. We would end up with two different definitions of one term.
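A minimal sketch, not the author's code, of how the add method could bundle those extra actions so that no caller can bypass them; ProductAdded is a hypothetical domain event, and event() is Laravel's event dispatcher helper:
class Product
{
    private $title;
    private $price;

    private function __construct(Title $title, Price $price)
    {
        $this->title = $title;
        $this->price = $price;
    }

    public static function add(Title $title, Price $price)
    {
        $product = new self($title, $price);
        // The extra domain behaviour lives here, e.g. raising an event
        // that a listener uses to send the notification email.
        event(new ProductAdded($product));
        return $product;
    }
}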
Layered Architecture
In this article, I am not planning to talk about object-oriented design. But Domain-Driven Design proposes the fundamentals of a good design. Eric Evans believes:
Developing a good domain model is an art.
To develop a good domain model, you need to know about Model-Driven Design, which means keeping the model and the implementation closely connected. Layered Architecture is one of the building blocks of Model-Driven Design.
In order to tackle complexity, we need to separate concerns: isolating each part of the design lets us concentrate on specific sections, while the parts must still be able to interact with each other. Layered Architecture is a convention for this isolation that has grown out of years of experience. The layers are listed below:
- User Interface
- Application Layer
- Domain Layer
- Infrastructure Layer
Let's describe each layer in detail.
The User Interface is responsible for showing information to the user and interpreting the user's commands. In Laravel, the views are the User Interface (Presentation) Layer.
The Application Layer is how we communicate with the world outside the application domain. This layer behaves as a public API for our application; it does not contain business rules or knowledge. Laravel's controllers live here.
The Domain Layer is the heart of the business software. It is responsible for representing the concepts of the business, information about the business situation, and business rules.
The Infrastructure Layer provides the generic technical capabilities that support the higher layers and the patterns of interaction between the four layers (which is why repositories are located in this layer).
Let me describe this with an example. If we are an email company (such as Gmail), email belongs in the Domain Layer, but if we are a food delivery company, email belongs in the Infrastructure Layer, because sending email is not a vital part of food delivery.
Connections between the layers are necessary, but they must not erode the benefit of the separation, so the dependencies run in one direction: upper layers may communicate with the layers below them. If a lower layer needs to notify an upper layer, it must use a pattern such as a callback or an observer, as sketched below.
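Here is a small, illustrative observer sketch (the class and method names are invented): the lower layer owns the observer interface, and an upper layer implements it, so the lower layer never references classes above it:
interface OrderObserver
{
    public function orderPlaced(Order $order);
}

class Order
{
    private $observers = [];

    public function attach(OrderObserver $observer)
    {
        $this->observers[] = $observer;
    }

    public function place()
    {
        // ...domain rules for placing the order...
        // Notify any upper-layer listeners without knowing who they are.
        foreach ($this->observers as $observer) {
            $observer->orderPlaced($this);
        }
    }
}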
Implementing Domain-Driven Design in Laravel
If you are curious to see how Domain-Driven Design can be applied to a project built with Laravel 5 as the underlying framework, I will describe the folder structure in this article and cover further concepts in follow-up articles. To start, we create the following folders inside the app folder in the project root:
├── Presentation
├── Application
├── Domain
└── Infrastructure
Presentation is the view layer, where we put our Blade files. Application is where we put our controllers. Laravel 5 lets us configure the controller namespace in App/Providers/RouteServiceProvider.php:
protected $namespace = 'App\Application\controllers';
You can also configure the view path in config/view.php:
//...
'paths' => [
    realpath(base_path('App/Presentation')),
//...
Now we have configured Laravel's folder structure. If you are wondering where Laravel's default models would be located, I should mention that a real repository in Domain-Driven Design must work with plain PHP objects as database objects, whereas Laravel's Eloquent (Active Record) is a God object (a big ball of mud). Also, to have good aggregates, we would need to use a data mapper (such as Doctrine ORM) instead of Active Record.
Conclusion
This article is one of a series on Domain-Driven Design in Laravel. Laravel needs some extra packages and configuration to be adapted to Domain-Driven Design.
As you can see, I have presented the core concepts, but I realize they may be hard to grasp from this article alone. I will cover more in the next articles in this series.
by Alireza Rahmani Khalili via Envato Tuts+ Code
Simple Grid
by Rob Hope via One Page Love