Teasing website for the next RecordRecord Filet Mignon
Galaxia.co is a concept site that demonstrates the essence of a new social app, one built upon a strong conceptual theory and philosophy.
Green arTI is a company focused on digital inclusion processes. We help companies develop a structured, well-planned strategy so they can obtain the best possible return.
The mission is to guarantee an advantage over competitors. This can manifest itself through UXD and careful content curation.
Granny's Secret is a premium brand of all-natural fine foods and fruit juices.
Professional portfolio of digital & visual designer Matteo Orilio, currently based in Naples, Italy.
Welcome to our weekly edition of what’s hot in social media news. To help you stay up to date with social media, here are some of the news items that caught our attention. What’s New This Week Snapchat Now Serves 10 Billion Daily Video Views: “Now users are watching 10 billion videos a day on [...]
Design House is an award-winning digital creative & website design agency with a team of pioneer designers, coders and digital strategists obsessed with UX & Design. They are headquartered in Miami-Coral Gables, Florida, and partner with businesses a
Change is inevitable, and change is exciting. There is a time when change is necessary, imminent even. And after 7 years of being the best at what we do, we felt our time for change had come.
A jQuery plugin to validate form fields, designed for Bootstrap.
Jquery.barrager.js is an elegant web barrage plugin. It supports displaying images, text and hyperlinks, with customizable speed, height, color and quantity.
This article is an introduction to the advanced waters of testing AntiPatterns in Rails. If you are rather new to Test-Driven Development and want to pick up a couple of very valuable best practices, this article was written exactly for you.
describe Mission do
  let(:james_bond) { build_stubbed(:agent, name: 'James Bond', number: '007') }
  let(:mission)    { build_stubbed(:mission, title: 'Moonraker') }

  ...
end
The let helper method in RSpec is very frequently used for creating instance variables that are available between multiple tests. As an eager student of TDD practices, you have probably written your fair share of these, but following this practice can easily lead to having lots of mystery guests showing up—see below—which is definitely not something we need to have crashing our party!

This particular side effect of let has gained a bit of a reputation for possibly causing increased test maintenance and inferior readability throughout your test suite. let sure sounds enticing because it's lazily evaluated and aids adhering to the usually zero-defect concept of DRY and all. Therefore it seems too good not to use on a regular basis. Its close cousin subject should also be avoided most of the time.
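To make the trade-off concrete, here is a dependency-free sketch of roughly what let gives you: a lazily evaluated, memoized helper method. LetSketch and MissionSpecSketch are illustrative names; this is not RSpec's actual implementation.

```ruby
class LetSketch
  # Roughly what RSpec's `let` does under the hood: define a lazily
  # evaluated, memoized instance method (illustrative sketch only).
  def self.let(name, &block)
    define_method(name) do
      @__memo ||= {}
      @__memo.fetch(name) { @__memo[name] = instance_eval(&block) }
    end
  end
end

class MissionSpecSketch < LetSketch
  @@builds = 0
  let(:agent) { @@builds += 1; { name: 'James Bond', number: '007' } }

  def self.builds
    @@builds
  end
end

spec = MissionSpecSketch.new
spec.agent # first access evaluates the block...
spec.agent # ...later accesses return the memoized value
```

The laziness and memoization are exactly what make let feel so convenient, and also what let objects silently travel far away from the tests that use them.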
It gets worse when you start nesting these things. let statements plastered all over nested describe blocks are an all-time favorite. I think it's not unfair to call this a recipe for hanging yourself—quickly. More limited scope is generally easier to understand and follow.
We don’t want to build a house of cards with semi-global let fixtures that obscure understanding and increase the chances of breaking related tests. The odds of crafting quality code are stacked against us with such an approach. Extracting common object setup is also easier to do via plain old ruby methods or even classes if needed.
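As a sketch of that alternative, assume a plain Ruby helper method in place of a let fixture. Agent and build_agent are illustrative stand-ins; in a real suite you would probably delegate to Factory Girl.

```ruby
# Plain old Ruby instead of a semi-global `let`: a small, explicit
# helper method that each test calls right where the object is needed.
# `Agent` is a stand-in class for illustration purposes.
Agent = Struct.new(:name, :number)

def build_agent(name: 'James Bond', number: '007')
  Agent.new(name, number)
end

# The origin of each object is visible at the call site:
default_agent = build_agent
felix         = build_agent(name: 'Felix Leiter', number: nil)
```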
This let creature is a widely shared fixture which will often need to be deciphered first before you know exactly what business this object has in your tests. Also, going back and forth to understand what exactly they are made of and which relationships they have via associations can be a time-consuming pain.
The clarity of these details in your test setup usually helps a lot to tell other developers all they need to work with every particular part of your test suite—don’t forget your future self! In a world where you never have to revisit particular tests and even never refactor parts of your test suite, that might not matter as much—but that is a pipe dream for now!
We want to have as few collaborators and as little data as possible for each test. let works against you on that front as well. These let fixtures can amass a lot of attributes and methods that make them way too big.

If you start going down the let road, you will often end up with pretty fat objects that try to make a lot of tests happy at the same time. Sure, you can create lots of variations of these let thingies, but that makes the whole idea of them a bit irrelevant, I think. Why not go one step further, avoid let, and rely on Ruby without RSpec DSL magic?
I'm more in the camp of repeated setup code for each test than being overly DRY, obscure or cryptic in my test suite. I'd always go for more readability. The test method should make clear the cause and effect of its involved pieces—using object collaborators that are possibly defined far away from your test exercise is not in your best interest. If you need to extract stuff, use expressive methods that encapsulate that knowledge.
These are pretty much always a safe bet. That way you can also supply the setup that you actually need for each test and not cause slow tests because you have unnecessary data involved. Good old variables, methods and classes are often all you need to provide faster, stable tests that are easier to read.
Mystery Guests are really RSpec DSL puzzles. For a while, the various objects defined via the RSpec DSL let are not that hard to keep in check, but soon, when the test suite grows, you invite a lot of mysterious guests into your specs. This gives your future self and others unnecessary context puzzles to solve.
The result will be obscure tests that require you to go into full Sherlock Holmes mode. I guess that sounds way more fun than it is. Bottom line, it’s a waste of everybody’s time.
Mystery Guests pose two problematic questions, as the following example shows:
describe Mission do
  let(:agent_01) { build_stubbed(:agent, name: 'James Bond', number: '007') }
  let(:agent_02) { build_stubbed(:agent, name: 'Moneypenny', number: '243') }
  let(:title)    { 'Moonraker' }
  let(:mission)  { build_stubbed(:mission, title: title) }

  before { mission.agents << agent_01 << agent_02 }

  ...
  # lots of other tests

  describe '#top_agent' do
    it 'returns highest ranking agent associated to a mission' do
      expect(mission.top_agent).to eq('James Bond')
    end
  end
end
This describe block for #top_agent lacks clarity and context. What agent is involved, and what mission are we talking about here? This forces developers to go hunting for objects that suddenly pop up in your tests.

A classic example of a mystery guest. When lots of code sits between the relevant test and the origin of these objects, the chances of obscuring what's going on in your tests increase.
The solution is quite easy: you build fresh, local versions of the objects with exactly the data that you need—and not more than that! Factory Girl is a good choice for handling this.
This approach can be considered more verbose, and you might be duplicating stuff sometimes—extracting stuff into a method is often a good idea—but it’s a lot more expressive and keeps tests focused while providing context.
describe Mission do
  # ...
  # lots of other tests

  describe '#top_agent' do
    it 'returns highest ranking agent associated to a mission' do
      agent_01 = build_stubbed(:agent, name: 'James Bond', number: '007')
      agent_02 = build_stubbed(:agent, name: 'Moneypenny', number: '243')
      mission  = build_stubbed(:mission, title: 'Moonraker')
      mission.agents << agent_01 << agent_02

      expect(mission.top_agent).to eq('James Bond')
    end
  end
end
The example above builds all the objects needed for our tests in the actual test case and provides all the context wanted. The developer can stay focused on a particular test case and does not need to “download” another—possibly totally unrelated—test case for dealing with the situation at hand. No more obscurity!
Yes, you are right, this approach means that we are not achieving the lowest level of duplication possible, but clarity in these cases is much more important for the quality of your test suite and therefore for the robustness of your project. The speed in which you can effectively apply changes to your tests also plays a role in that regard.
Another important aspect of testing is that your test suite not only can function as documentation but absolutely should! Zero duplication is not a goal that has a positive effect for specs documenting your app. Keeping unnecessary duplication in check is nevertheless an important goal to keep sight of—balance is king here!
Below is another example that tries to set up everything you need locally in the test but also fails because it’s not telling us the full story.
...

context "agent status" do
  it "returns the status of the mission's agent" do
    double_o_seven = build_stubbed(:agent)
    mission = build_stubbed(:mission, agent: double_o_seven)

    expect(mission.agent_status).to eq(double_o_seven.status)
  end
end
We are creating a generic agent. How do we know it's 007? We are also testing for the agent's status, but it's nowhere to be found either—neither in the setup nor explicitly during the verify phase in our expect statement. The relationship between double_o_seven.status and the mission's agent status could be confusing since it comes out of nowhere, really. We can do better:
...

context "agent status" do
  it "returns the status of the mission's agent" do
    double_o_seven = build_stubbed(:agent, name: 'James Bond', status: 'Missing in action')
    mission = build_stubbed(:mission, agent: double_o_seven)

    expect(mission.agent_status).to eq('James Bond: Missing in action')
  end
end
Again, here we have all we need to tell a story. All the data we need is right in front of us.
So, you have started to get into Test-Driven Development, and you've started to appreciate what it offers. Kudos, this is great! I'm sure that neither the decision to do it nor the learning curve to get there was exactly a piece of cake. But what often happens after this initial step is that you try hard to have full test coverage, and you start to realize that something is off when the speed of your specs starts to annoy you.
Why is your test suite getting slower and slower although you think you are doing all the right things? Feeling a bit punished for writing tests? Slow tests suck—big time! There are a couple of problems with them. The most important issue is that slow tests lead to skipping tests in the long run. Once you are at a point where your test suite takes forever to finish, you will be much more willing to think to yourself: “Screw this, I’ll run them later! I got better things to do than waiting for this stuff to finish.” And you are absolutely right, you have better things to do.
The thing is, slow tests are more likely to welcome in compromises in the quality of your code than may be obvious at first. Slow tests also fuel people’s arguments against TDD—unfairly so, I think. I don’t even want to know what non-technical product managers have to say if you regularly have to step outside for a nice long coffee break just to run your test suite before you can continue your work.
Let's not go down that road! When you only need a little time to exercise your tests and as a result get super quick feedback cycles for developing each step of new features, practicing TDD becomes a lot more attractive and much harder to argue against. With a little bit of work and care along the way, we can avoid slow-mo tests quite effectively.
Slow tests are also a killer for getting into the “zone”. If you get taken out of the flow this frequently in your process, the quality of your overall work might also suffer by having to wait for slow tests to return from an expensive round trip. You want to get as much “in-the-zone time” as possible—unbearably slow tests are major flow killers.
Another issue worth mentioning in this context is that this might lead to having tests that cover your code, but because you won't take the time to exercise the whole suite, or because you write tests after the fact, your app's design won't be driven by tests anymore. If you are not on the Test-Driven hype train, this might not bother you much, but for TDD folks, that aspect is essential and should not be neglected.
Bottom line, the faster your tests, the more you will be willing to exercise them—which is the best way to design apps as well as to catch bugs early and often.
What can we do to speed up tests? There are two speeds that are important here:
That does not mean that you should avoid it at all costs. Often you don't need to write tests that exercise the database, and you can trim off a lot of the time your tests need to run. Using just new to instantiate an object is often sufficient for test setups. Faking out objects that are not directly under test is another viable option.
Creating test doubles is a nice way to make your tests faster while keeping the collaborating objects you need for your setup super focused and lightweight. Factory Girl also gives you various options to smartly “create” your test data. But sometimes there is no way around saving to the database (which is a lot less often than you might expect), and this is exactly where you should draw the line. Any other time, avoid it like hell and your test suite will stay fast and agile.
In that regard you should also aim for a minimal amount of dependencies, which means the minimal amount of objects that you need collaborating to get your tests to pass—while saving as little as possible to the database along the way. Stubbing out objects—which are mere collaborators and not directly under test—often also makes your setup easier to digest and simpler to create. A nice speed boost overall with very little effort.
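A minimal, dependency-free sketch of that idea: the collaborator is replaced by a tiny fake that responds to just the one message the object under test needs, and never touches the database. Mission and FakeAgent are illustrative here; in RSpec you would typically reach for verified doubles or Factory Girl's build_stubbed.

```ruby
# The object under test only sends one message (#status) to its
# collaborator, so the fake only needs to answer that one message.
class Mission
  def initialize(agent)
    @agent = agent
  end

  def agent_status
    @agent.status
  end
end

# A lightweight stand-in for a full-blown Agent record: no database,
# no validations, no unused attributes.
FakeAgent = Struct.new(:status)

mission = Mission.new(FakeAgent.new('Missing in action'))
mission.agent_status # => 'Missing in action'
```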
This means that you want to have a majority of unit tests at the bottom of the testing pyramid—all focused on very specific parts of your application in isolation—and the smallest number of integration tests at the top of the pyramid. Integration tests simulate a user going through your system while interacting with a bunch of components that are exercised around the same time.
They are easy to write but not so easy to maintain—and the speed losses are not worth going the easy route. Integration tests are pretty much the opposite of unit tests in regard to being high level and sucking in a lot of components that you need to set up in your tests—which is one major reason why they are slower than unit tests.
I guess this makes it clear why they should be at the top of your testing pyramid to avoid significant speed losses. Another important issue here is that you want to have as little overlap between these two test categories as possible—you ideally want to test things only once, after all. You can’t expect to have perfect separation, but aiming for as little as possible is a reasonable and achievable goal.
In contrast to unit tests, you want to test as few details as possible with integration tests. The inner mechanics should already be covered by extensive unit tests. Focus instead only on the most essential parts that the interactions need to be capable of exercising! The other main reason is that a webdriver needs to simulate going through a browser and interacting with a page. This approach fakes out nothing or very little, saves the stuff to the database and really goes through the UI.
That's also one reason they can be called acceptance tests: these tests try to simulate a real user experience. This is another major speed bump that you want to exercise as little as possible. If you have a ton of these tests—I guess more than 10% of your overall number of tests—you should slow down and reduce that number to the minimum amount possible.
Also, keep in mind that sometimes you don’t need to exercise the whole app—a smaller, focused view test often does the trick as well. You will be much faster if you rewrite a couple of your integration tests that just test a little bit of logic that does not necessitate a full integration check. But don’t get into writing a ton of them either; they offer the least bang for the buck. That being said, integration tests are vital to the quality of your test suite, and you need to find a balance of being too stingy applying them and not having too many of them around.
Quick feedback and fast iteration cycles are key to designing your objects. Once you start to avoid running these tests frequently, you are losing this advantage—which is a big aid for designing objects. Don’t wait until your Continuous Integration service of choice kicks in to test your whole application.
So what’s a magic number we should keep in mind when running tests? Well, different people will tell you different benchmarks for this. I think that staying under 30 seconds is a very reasonable number that makes it very likely to exercise a full test on a regular basis. If you leave that benchmark more and more behind, some refactoring might be in order. It will be worth it and it will make you feel much more comfortable because you can check in more regularly. You will most likely move forward a lot faster too.
You want that dialog with your tests to be as fast as possible. Tightening this feedback cycle by using an editor that can also exercise your tests is not to be underestimated. Switching back and forth between your editor and your terminal is not the best solution to handle this. This gets old very quickly.
If you like using Vim, you have one more reason to invest some time in becoming more efficient at using your editor. There are lots of handy tools available for Vim peeps. I remember that Sublime Text also offers to run tests from within the editor, but other than that, you need to do a little bit of research to find out what your editor of choice is capable of in that regard. The argument that you will hear frequently from TDD enthusiasts is that you don’t want to leave your editor because overall you will be spending too much time doing that. You want to stay much more in the zone and not lose the train of thought when you can do this sort of thing via a fast shortcut from inside your code editor.
Another thing to note is that you also want to be able to slice the tests that you want to run. If you don’t need to run the whole file, it’s nice to run a single test or a block that focuses just on what you need to get feedback on right now. Having shortcuts that help you run single tests, single files or just the last test again saves you a ton of time and keeps you in the zone—not to mention the high degree of convenience and feeling super dandy cool as well. It’s just amazing how awesome coding tools can be sometimes.
One last thing for the road. Use a preloader like Spring. You will be surprised how much time you can shave off when you don’t have to load Rails for every test run. Your app will run in the background and does not need to boot all the time. Do it!
I’m not sure if fixtures are still an issue for newbies coming to Ruby/Rails land. In case nobody instructed you about them, I’ll try to get you up to speed in a jiffy on these dreaded things.
ActiveRecord database fixtures are great examples of having tons of Mystery Guests in your test suite. In the early days of Rails and Ruby TDD, YAML fixtures were the de facto standard for setting up test data in your application. They played an important role and helped move the industry forward. Nowadays, they have a reasonably bad rep though.
Quartermaster:
  name: Q
  favorite_gadget: Broom radio
  skills: Inventing gizmos and hacking

00Agent:
  name: James Bond
  favorite_gadget: Submarine Lotus Esprit
  skills: Getting Bond Girls killed and covert infiltration
The hash-like structure sure looks handy and easy to use. You can even reference other nodes if you want to simulate associations from your models. But that's where the music stops and many say their pain begins. For data sets that are a bit more involved, YAML fixtures are difficult to maintain and hard to change without affecting other tests. I mean, you can make them work, of course—after all, developers used them plenty in the past—but tons of developers will agree that the price to pay for managing fixtures is just too steep.
One scenario we definitely want to avoid is changing little details on an existing fixture and causing tons of tests to fail. If these failing tests are unrelated, the situation is even worse—a good example of tests being too brittle. Trying to "protect" existing tests from this scenario can also lead to growing your fixture set beyond any reasonable size—being DRY with fixtures is most likely off the table at that point.
To avoid breaking your test data when the inevitable changes occur, developers were happy to adopt newer strategies that offered more flexibility and dynamic behaviour. That’s where Factory Girl came in and kissed the YAML days goodbye.
Another issue is the heavy dependency between the test and the .yml fixture file. Since the fixtures are defined in a separate .yml file, mystery guests are also a major pain waiting to bite you due to being obscure. Did I mention that fixtures are imported into the test database without running through any validations and don't adhere to the Active Record life cycle? Yeah, that's not awesome either—from whatever angle you look at it!
Factory Girl lets you avoid all that by creating objects relevant to the tests inline—and only with the data needed for that specific case. The motto is, only define the bare minimum in your factory definitions and add the rest on a test-by-test basis. Locally (in your tests) overriding default values defined in your factories is a much better approach than having tons of fixture unicorns waiting to be outdated in a fixture file.
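The principle can be sketched without the Factory Girl dependency: keep the defaults minimal, and let each test override locally only what it cares about. AGENT_DEFAULTS and agent_attributes are illustrative names, not Factory Girl API.

```ruby
# Bare-minimum defaults, overridden per test: a sketch of the idea
# behind minimal factory definitions. Not actual Factory Girl code.
AGENT_DEFAULTS = { name: 'Agent', number: '000' }.freeze

def agent_attributes(overrides = {})
  AGENT_DEFAULTS.merge(overrides)
end

# The test supplies exactly the data it cares about, nothing more:
bond = agent_attributes(name: 'James Bond', number: '007')
```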
This approach is more scalable too. Factory Girl gives you plenty of tools to create all the data you need—as nuanced as you like—but also provides you tons of ammo to stay DRY where needed. The pros and cons are nicely balanced with this library, I think. Not dealing with validations is also not a cause for concern anymore. I think using the factory pattern for test data is more than pretty reasonable and is one major reason why Factory Girl was so well received by the community.
Complexity is a fast-growing enemy that YAML fixtures are hardly equipped to take on effectively. In some ways, I think of fixtures as let on steroids. Not only are they placed even further away—being in a separate file and all—you are also potentially preloading way more fixtures than you might actually need. RIP!
If changes in your specs lead to seemingly unrelated failures in other tests, you are likely looking at a test suite that has become fragile due to causes mentioned above. These often puzzle-like, mystery-guest-infested tests easily lead to an unstable house of cards.
When objects necessary for tests are defined "far away" from the actual test scenario, it's not that hard to overlook the relationships these objects have with their tests. When code gets deleted or adjusted, or the setup object in question simply gets accidentally overridden—without awareness of how this could influence other tests—failing tests are not a rare encounter. They easily appear to be totally unrelated failures. I think it's fair to include such scenarios in the category of tightly coupled code.
describe Mission do
  let(:agent)   { build_stubbed(:agent, name: 'James Bond', number: '007') }
  let(:title)   { 'Moonraker' }
  let(:mission) { build_stubbed(:mission, title: title) }

  # ...
  # lots of other tests

  describe '#joint_operation_agent_name' do
    let(:agent) { build_stubbed(:agent, name: 'Felix Leiter', agency: 'CIA') }

    before { mission.agents << agent }

    it "returns mission's joint operation's agent name" do
      expect(mission.joint_operation_agent_name).to eq('Felix Leiter')
    end
  end
end
In this scenario, we have clearly modified locally an object's state which was defined in our setup. The agent in question is now a CIA operative and has a different name. mission again comes out of nowhere as well. Nasty stuff, really.
It's no surprise when other tests that possibly rely on a different version of agent start to blow up. Let's get rid of the let nonsense and build the objects we need right where we test them—with only the attributes we need for the test case, of course.
describe Mission do
  # ...
  # lots of other tests

  describe '#joint_operation_agent_name' do
    it "returns mission's joint operation's agent name" do
      agent   = build_stubbed(:agent, name: 'Felix Leiter', agency: 'CIA')
      mission = build_stubbed(:mission)
      mission.agents << agent

      expect(mission.joint_operation_agent_name).to eq('Felix Leiter')
    end
  end
end
It is important to understand how objects are related—ideally with the minimum amount of setup code. You don’t want to send other developers on a wild goose chase to figure this stuff out when they stumble over your code.
If it's super hard to get a grasp quickly and a new feature needed to be implemented yesterday, these puzzles cannot expect to be given the highest priority. This in turn often means that new stuff gets developed on top of that unclear context—which is a brittle basis for going forward and also super inviting for bugs down the road. The lesson to take away here: avoid overriding setup objects wherever possible.
A final useful tip for avoiding brittle tests is to use data attributes in your HTML tags. Just do yourself a favor and use them—you can thank me later. This lets you decouple the needed elements under test from the styling information that your designers might touch frequently without your involvement.
If you hard-code a class like class='mission-wrapper' in your test and a smart designer decides to change this poor name, your test will be affected unnecessarily. And the designer is not to blame, of course. How in the world would she know that this affects part of your test suite?
<div class='mission' data-role='single-mission'>
  <h2><%= @mission.agent_status %></h2>
  ...
</div>
context "mission's agent status" do
  it 'does something with a mission' do
    ...

    expect(page).to have_css '[data-role=single-mission]'
  end
end
We expect to see some HTML element on the page, and we marked it with a data-role attribute. Designers have no reason to touch that, and you are protected against brittle tests that happen due to changes on the styling side of things.
It's a pretty effective and useful strategy that basically costs you nothing in return. The only thing that might be necessary is to have a short conversation with the designers. Piece of cake!
We want to avoid distracting people who will read our tests or, even worse, confuse them. That is opening the door for bugs but can also be expensive because it can cost valuable time and brain power. When you create your tests, try hard not to override things—it does not aid in creating clarity. More likely it will lead to subtle, time-consuming bugs and won’t affect the aspect of documenting your code positively.
This creates an unnecessary burden we can avoid. Mutating test data more than absolutely necessary is also worth being a bit paranoid about. Keep it as simple as possible! This really helps you avoid sending other developers or your future self on wild goose chases.
There is still a lot to learn about things you should avoid while testing, but I believe this is a good start. Folks who are rather new to all things TDD should be able to handle these few AntiPatterns right away before diving into more advanced waters.
Now that you've developed your amazing and ground-breaking app, it's time to make a return on the sizable investment of resources expended during the design and development.
App monetization can be tricky. You want to achieve the highest number of downloads possible, so keep the barrier to entry low by offering your app for free. Or you have to make up for the lack of download revenue with ads, subscriptions or another monetization option. These can all annoy your users if implemented without care.
By providing alternative monetization options, you have a better chance of making sales and receiving a high amount of downloads. Aside from traditional advertising and alternative app stores to sell your app, the following models have been successful at driving revenue without ads.
In my article SQL vs NoSQL: The Differences I mentioned the line between SQL and NoSQL databases has become increasingly blurred with each camp adopting features from the other. MySQL 5.7 InnoDB databases and PostgreSQL 9.4 both directly support JSON document types in a single field. In this article we'll examine MySQL's JSON implementation in more detail.
(PostgreSQL supported JSON before version 9.4 and any database will accept JSON documents as a single string blob. However, both now directly support validated JSON data in real key/value pairs rather than a basic string.)
…it doesn't follow you should.
Normalisation is a technique used to optimize the database structure. The First Normal Form (1NF) rule requires that every column hold a single value -- which is clearly broken by storing multi-value JSON documents.
If you have clear relational data requirements, use appropriate single-value fields. JSON should be used sparingly, as a last resort. JSON value fields cannot be indexed, so avoid using them on columns which are updated or searched regularly. In addition, fewer client applications support JSON, and the technology is newer and possibly less stable than other types.
That said, there are good JSON use-cases for sparsely-populated data or custom attributes.
Consider a shop selling books. All books will have an ID, ISBN, title, publisher, number of pages and other clearly relational data. Presume we want to add any number of category tags to each book. We could achieve this in SQL using normalized tag tables:
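The SQL snippet appears to have been lost from the source; the normalized approach alluded to would need a tag table plus a many-to-many join table, roughly along these lines (table and column names are illustrative):

[code language=sql]
CREATE TABLE `tag` (
  `id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
  `name` varchar(100) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB;

CREATE TABLE `book_tag` (
  `book_id` mediumint(8) unsigned NOT NULL,
  `tag_id` mediumint(8) unsigned NOT NULL,
  PRIMARY KEY (`book_id`, `tag_id`)
) ENGINE=InnoDB;
[/code]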
It'll work but it's cumbersome and considerable effort for a minor feature. Therefore, we'll define a tags JSON field in our MySQL database's book table:
[code language=sql]
CREATE TABLE `book` (
`id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
`title` varchar(200) NOT NULL,
`tags` json DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB;
[/code]
I am really excited about Laravel Spark. By the time you read this, there will probably be a multitude of posts explaining how you can set it up. That's not as interesting to me as the journey I'm about to take in creating an actual business with Spark!
The idea is simple. I have created a Pagekit module which you can use to back up and restore site data. The module makes it easy to store and download these backups, and restore them on different servers.
The trouble is, getting those backup files to the remote server takes time and a bit of hassle. I have often wanted a way to quickly and painlessly transfer this application state from one server to another, and make automated offsite backups. So I'm going to set that up for myself, and perhaps others will find it useful enough to pay for it.
I'm using Stripe, and intend to have a single plan with no trial. The setup for this is quite easy, but I've made a note of the plan ID. I'll need that to set the plan up in Spark...
Next, I reset my secret and public Stripe keys and update to the latest API (through the same screen, http://ift.tt/1rnQsJq).
I forgot that the settings in .env do not automatically reload while the Laravel development server is running, so I was getting needlessly frustrated at keys that wouldn't seem to update.
Spark has a few expected registration/profile fields, but I want to add a few more. I'd like to ask users whether they want automatic backups, and I'd also like to collect their billing address so I can show it on their invoice. First I'll create a migration for the new field:
php artisan make:migration add_should_backup_field
In the migration, we add the column (making sure to remove it if the migrations are rolled back):
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

class AddShouldBackupField extends Migration
{
    public function up()
    {
        Schema::table("users", function (Blueprint $table) {
            $table->boolean("should_backup");
        });
    }

    public function down()
    {
        Schema::table("users", function (Blueprint $table) {
            $table->dropColumn("should_backup");
        });
    }
}
Continue reading %Starting a Business with Laravel Spark%
Have you heard of Google AMP? Want to know how it will impact your blog? To discover more about Google AMP and the future of blogging, I interview Leslie Samuel. More About This Show The Social Media Marketing podcast is an on-demand talk radio show from Social Media Examiner. It’s designed to help busy marketers [...]
This post Google AMP: What Bloggers Need to Know first appeared on .
- Your Guide to the Social Media Jungle
Listy is a jQuery plugin that aims to help developers make lists browsable via the keyboard.
The post Listy : jQuery plugin for Lists browsable through Keyboard appeared first on jQuery Rain.
With XSiteBuilder, you can create websites by drag and drop and publish them to any hosting server via FTP. Building websites this way is faster and saves a ton of working time.
The post XSiteBuilder : Drag Drop Site Builder with PHP & jQuery appeared first on jQuery Rain.
A jQuery plugin to transform standard HTML radio buttons and checkboxes (with title attributes) into easily clickable elements.
The post zInput : jQuery Custom Radio buttons & Checkboxes Plugin appeared first on jQuery Rain.
After taking a break in March, we are back with Sourcehunt Design for a little spring cleaning! This month, unlike before, we are going to focus exclusively on two major open source projects: Mozilla and Fedora.
Both projects are major players in the open source world and have some of the healthiest communities among open source projects, which leads us to the design aspect here: both are quite welcoming to new contributors looking to get involved.
Let's have a look.
Design at Mozilla has usually been an employee-specific field for the Firefox creators. Volunteer design work has always been somewhat unofficial, and most of the time it happened only when volunteers were approached directly by employees.
However, this has changed in recent months with the introduction of the Community Design repo. With its inception, both employees and volunteers on Mozilla projects can request design help for their projects, or chime in to help other contributors with their requests. I recently wrote about the initiative on my blog as well.
Let's have a look at this process and how you can get involved in a Mozilla project.
You can find the Mozilla Community Design repo on GitHub, part of the Mozilla organization. You will be greeted with an introduction to how the design processes work. A little below, you will find the template for filing issues, so all the needed details are included properly when filing requests.
Make sure to check out the tutorial in case you stumble upon any issues (no pun intended).
However, here we are going to focus on contributing to design at Mozilla, not requesting design help, so let's look at a good example of a design issue from Mozilla, one I personally completed recently. The Transvision team needed a logo for their software, a translation memory web application created by the French Mozilla community and now maintained by both Mozilla staff and volunteers.
Pascal Chevrel created a very clear brief on how he envisioned the logo, while also giving enough creative freedom to any designer who wanted to take it on. Some back-and-forth discussion resolved the request, and the logo can be seen live on the Transvision website. It's that simple!
After you finish a request, chances are you will be asked for the final exported files. Feel free to link to them in the corresponding GitHub issue. However, in true open source fashion, also push them to a new folder in the repo itself (or create a pull request for it). A good example is the Mozilla Netherlands logo I created.
Once a month, the Community Design group meets on Vidyo, Mozilla's organization-wide video-conferencing tool. Everyone is free to join, whether they have contributed before or not. This is, in fact, a very rewarding experience in itself, as Mozilla's Creative Team is usually present as well. The meeting happens every second Thursday of the month, from 5:15 to 6:00 PM UTC. You can find the notes on the meeting's Etherpad.
Furthermore, there is a very helpful video if you need more help on getting involved in the GitHub repo. Feel free to also add more questions on the Mozilla Discourse if you are stuck at some point.
There is also a public Telegram group, where members chat.
Fedora is a popular open source Linux-based operating system designed to offer a secure, general-purpose experience. It is said to be the second most commonly used Linux distribution, after Ubuntu. There are over a hundred distributions based on Fedora, including Red Hat Enterprise Linux (RHEL), whose maker also sponsors the Fedora Project. Here is a broader overview of what Fedora stands for.
Unlike the rather new Community Design initiative at Mozilla, Fedora's design processes have been established for quite some time. As one of the major Linux distributions, Fedora prides itself on being a FOSS distribution that focuses on innovation and close work with upstream Linux communities. This can also be seen in the Fedora Design team: design processes happen completely in the open, with a transparent issue tracker, biweekly meetings, a wiki and more.
It should be noted, however, that Fedora officially uses free and open source software for its design needs as well. That means that instead of Adobe Photoshop or Illustrator, GIMP and Inkscape are used for all design-related tasks. This isn't a strict requirement in the beginning, though, as the Fedora Design team welcomes new contributors and helps them get involved. Eventually you will use GIMP and/or Inkscape in this process, though. I am myself a contributor on the Fedora Design team, which has helped me improve my Inkscape skills as well. Don't be afraid to give it a try!
The Fedora Account System is the organization-wide authentication system for everything Fedora. With a single account, you get access to all internal Fedora services and platforms (it's also based on free and open source software, so you don't need to worry about your privacy or security should you decide to disable your account at some point). Feel free to create one on the FAS website for the next steps.
The first resource you should look up is the Fedora Design Team wiki page. There you will find everything you need to get started in various contribution areas, including web design, mockups, artwork, stickers and more.
One of the most interesting projects (to me at least) is Fedora Badges. 'Badges' is a playful reward system that recognizes active Fedora contributors for certain tasks. The more tasks you complete within the Fedora Project, the more Badges you receive.
That being said, designing badges is a great low-barrier contribution path into Fedora Design. If you want your designs to be part of the second most used Linux distribution, you should probably start here, with Badges.
To kick off, check out the Badges Tracker, which points you where to go next. The design resources are also extremely helpful (I find myself constantly coming back to them). Before you get your hands dirty, though, have a look at the Badge style guide.
Continue reading %Sourcehunt Design April: How About Adding Fedora to Your CV?%
Instant visual diffing with CSS blend modes!
A handy little site built by Una Kravets that enables you to easily compare your development site against your production one. Oh, and you can even test locally hosted addresses, which is awesome.
The post Diffee appeared first on Web Design Weekly.
In June 2015, Brendan Eich, inventor of JavaScript and co-founder of Mozilla, announced something very exciting for the web: WebAssembly.
Eich explains that JavaScript has been dubbed the assembly language of the web, a characterization he disagrees with, and goes on to introduce WebAssembly, "a new intermediate representation for safe code on the Web", as he describes it. Google, Microsoft, Mozilla, Apple and some other folks had been experimenting with it before Eich's announcement.
WebAssembly, “wasm” for short, .wasm filename suffix, is an emerging standard whose goal is to define a safe, portable, size- and load-time efficient binary compiler target which offers near-native performance—a virtual CPU for the Web.
Why the need for WebAssembly? Well, asm.js requires engines to optimize for it, making the parser the hot spot (literally - mobile devices can get really hot). This is due to the need for transport compression, which saves bandwidth but means payloads must be decompressed before parsing, which can be painful. Also, once browsers support the WebAssembly format natively, JavaScript and wasm can diverge, without introducing unsafe or inappropriate features into JavaScript just for use by compilers sourcing a few radically different programming languages.
Auth0 explains WebAssembly pretty well in this post, if you need a better overview.
WebAssembly is designed with several use cases in mind, inside and outside the browser. As you can guess, wasm can be used for image/video editing, AAA games in the browser, live augmentation, Virtual Reality and so much more. Pretty much everything that is already possible on the web, but with the potential to be faster and more efficient. But WebAssembly can be also useful outside the browser: server side applications, hybrid native apps, server side computing of untrusted code are just some of the potential applications.
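To get a feel for how compact the binary format is, here is a sketch that hand-assembles a minimal wasm module and instantiates it from JavaScript (the exported function name `add` and the byte layout are my own illustration of the format, not from the article):

```javascript
// A minimal WebAssembly module, hand-assembled into its binary format:
// (module (func (export "add") (param i32 i32) (result i32)
//   local.get 0 local.get 1 i32.add))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: one func, type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b                    // local.get 0, local.get 1, i32.add, end
]);

const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);
console.log(instance.exports.add(2, 3)); // 5
```

That whole "virtual CPU for the Web" fits in 41 bytes here, with no parsing of JavaScript source involved.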
The roadmap is also progressing well. In the last year, the WebAssembly Community Group has made a great deal of progress, producing:
Continue reading %Quick Tip: Try WebAssembly in Your Browser Today%
When we create mechanics and their affordances, we create rules. Our selections for what we include or omit define the physics of the space we’re creating. Take binary gender selection for example: why do most games, and many services, first confront you with that choice?
In this session we’ll chat with Erin about how the designs we create reflect the ontology of our thinking, which then reflects our ethics and philosophy, and how that shapes our work and defines the choices that we make in the future.
Erin Hoffman-John is the Chief Designer and cofounder of Sense of Wonder, an independent mobile developer of “smart fun” games.
Previously she led game design at GlassLab, a three-year initiative, supported by the Bill & Melinda Gates and MacArthur Foundations, to establish integrated formative assessment educational games. Her game credits include Mars Generation One: Argubot Academy, Kung Fu Panda World, GoPets, and others. She is also the author of a fantasy trilogy with Pyr Books.
For more information, visit www.erinhoffman.com, http://ift.tt/LBaz9e, and twitter @gryphoness
If you can’t make the live session but have questions, we’d love to collect them ahead of time and we’ll ask Erin on your behalf. You can submit your questions here. We’ll publish the responses (along with the full transcript) in the days following the session.
Here are a few question ideas to get you started:
These sessions run for approximately an hour and best of all, they don’t cost a cent. We are trialling a new format for this session, using a dedicated public Slack channel. That means that there is no audio or video, but a full transcript will be posted up on here in the days following the session.
The post Ask the UXperts: Inclusive by design — with Erin Hoffman-John appeared first on UX Mastery.
WordPress development has come a very long way in recent years when it comes to tooling. In the past, developing a WordPress website required some sort of MAMP/WAMP localhost setup and, almost always, a rather painful headache. Maybe you're even one of those developers who developed their website on a live environment - I was.
Luckily, times have changed and there are now tools that help take the headache and repetitiveness out of building WordPress sites on your computer.
In December last year, after 3 years almost entirely away from WordPress development, I became a full-time WordPress developer again. Before that 3-year stint in the payments industry, I was a full-time WordPress contractor.
Being out of an industry for 3 years gave me a unique perspective on how fast things change in computing and, more specifically, web development. WordPress development is no exception.
You see, when I returned to WordPress development in December last year, I decided to look at setting up the perfect WordPress development environment. I was pleasantly surprised to see that the tooling around WordPress had advanced so much that it was much like trading in a Ford for a Ferrari.
I was excited, and still am of course, to explore all the tools and in today’s article I’m going to share with you a summary of what I have learned. Hopefully it will help you tweak your current environment and implement some of the tools that are available to you.
To begin with, the most important piece in the WordPress development environment puzzle is the server. Without a server, we can’t do anything.
There are so many different options available today to host WordPress websites on your local environment that it gets tricky to know which one to use.
I’m going to suggest that you drop MAMP/WAMP/XAMPP and start using a virtualized development environment.
Why? There are so many reasons:
Continue reading %The Ultimate WordPress Development Environment%
When you are looking for a place to sell your stuff, eBay is probably the first marketplace that comes to mind. It is the largest and probably best-known marketplace in the world, but this doesn't mean it is the best choice for you.
As good as eBay is, it's hard to ignore its disadvantages. It's crowded, sellers are frequently banned for little or no reason, disputes are settled in buyers' favor, and so on. All these reasons will certainly discourage potential sellers – who wants to invest their time and money in such an uncertain environment?
I considered selling on eBay myself, but after some preliminary research I gave up on the idea entirely. Instead, I started researching alternatives, and that is what inspired this article.
There are quite a lot of eBay alternatives, and I recommend you try a few. As I frequently say, don't put all your eggs in one basket – i.e. it's better to sell on multiple marketplaces than to focus all your resources on only one.
It's up to you (and the type of product you sell) to choose which ones to start with. The basic rule is that larger marketplaces attract more buyers, but smaller marketplaces can prove a better option because there's generally much less competition.
In addition to listing your products on a marketplace, setting up your own online store is always an option. However, it's hardly the easiest or cheapest one. While you certainly have more control with your own site, add up all the money and effort you have to put into setting the store up, maintaining it, and promoting it, and it turns out an established marketplace is a better choice, at least in the beginning.
Here are some other eBay alternatives for you to consider.
While many will argue Amazon isn't an eBay alternative at all because it shows some of the same symptoms that push sellers away — i.e. outrageous commissions, shops closed on a whim, etc. — you can't deny Amazon is a huge marketplace where you can sell almost anything you can think of (provided it's legal, of course).
It's not an exaggeration to say Amazon is an institution. Don't expect to go to the site and start selling right away. There is a lot to read before you can start selling. For instance, you need to consider the type of account to open (Professional Seller, Vendor, Manufacturer and Distributor, etc.), because there are several and each has different perks.
You also need to consider what to sell. In addition to the huge variety of products you can sell on many other sites, one unique aspect of Amazon is that you can sell services and self-publish. None of the other eBay alternatives on this list offer that.
As for payment, shipping, and commissions, these vary depending on the product and the type of account. Here are the general rules for shipping and delivery and for payment, pricing and promotions. You might also want to check fulfillment by Amazon.
In other words, if you decide to sell on Amazon, be prepared to spend days or even weeks researching how the system works. It really offers huge opportunities but it's not for beginner sellers. If you are new to online sales, you'd better start with easier places.
eBid is another huge marketplace, though not as big as eBay or Amazon, and it looks really promising. It's not new – it has been around since 1999 – but it has grown exponentially in recent years. One of its great features is that you can import items from Amazon, eBay and other marketplaces, which, coupled with its bulk upload functionality, is a huge timesaver.
eBid is a universal marketplace. It has more than 13,000 categories of products across all product groups – from books and tech, to clothes and household items. You can also sell wholesale.
I love that commissions are stated clearly on its homepage and a potential seller doesn't have to browse through countless pages to get this vital data:
“It's always free to list and only 3% sales fee. Want 0% sales fee for life? Upgrade to seller+lifetime for just €49.99.”
eBid offers Seller and Seller+ accounts. As for payments, they work with PPPay, PayPal, Skrill, and of course credit cards. All in all, for most products eBid is the best eBay alternative.
Rakuten is another global marketplace. It's huge in Japan but it is popular in many other countries as well. In the past I made some affiliate sales for them but I have no personal experience as a seller there. Rakuten is a universal site with goods in any category you can think of – books, personal stuff, tech, household items, etc.
If you are looking for a cheap place to sell, Rakuten is not the option for you. Compared to sites where listings are free and commissions are small, Rakuten's pricing options are outrageous, but if you manage to sell in volume there, it might turn out to be a better option than sites with no fees (and no buyers).
Similarly to some other big sites I didn't list, Rakuten is not open to international sellers. Here is what their terms state about eligibility:
“What are the requirements to sell?
Merchants on our site must have the following:
The choice of payment systems is up to you. As they state, “The majority of shops on our marketplace accept major credit cards (Visa, MasterCard, JCB, AMEX, Diners, etc), Paypal, Alipay, and bank transfers.” The same applies to shipment methods – you manage them individually, and you can use direct or indirect shipping.
Based on all this, my conclusion is that Rakuten doesn't compare well to the first two eBay alternatives, but it's still a big marketplace and might be an option for you after all.
Unlike the marketplaces covered so far, Etsy isn't a universal marketplace. Instead, it specializes in handmade and vintage items. In the beginning, sellers were allowed to sell only things they personally made, but now they can use dropshipping. This means you can sell print-on-demand items made at sites such as Zazzle or CafePress, too.
As for product categories, as I already mentioned, Etsy isn't a universal marketplace. It has the following categories: Clothing & Accessories, Jewelry, Craft Supplies & Tools, Weddings, Entertainment, Home & Living, Kids & Baby, and Vintage.
Etsy charges a $0.20 listing fee. A listing stays active for four months or until the product sells. In theory, the listing fee should reduce the amount of spam: when sellers have to pay a fee, they upload only their best items. Still, the fee is affordable unless you upload millions of items that don't sell.
The site uses its own Direct Checkout payment system, but in shop descriptions I've seen sellers mention that they accept direct payments as well. If you go with direct payments, it's up to you to choose which payment systems to accept. With Direct Checkout you can get paid via credit and debit cards, PayPal, Google Wallet, Apple Pay, and Etsy Gift Cards.
When you sell a product, you are charged a 3.5% transaction fee, plus 4% + US$0.30 for payment processing if you use Direct Checkout. If you don't use Direct Checkout, in most cases you still pay payment processing fees, but these vary depending on the service you use.
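To make those numbers concrete, here's a quick sketch of what a single $20.00 Direct Checkout sale would net, using the rates quoted above (the sale price is my own illustrative figure):

```python
# Etsy fees on one $20.00 Direct Checkout sale, using the rates quoted above
price = 20.00
listing_fee = 0.20                    # flat fee per listing
transaction_fee = price * 0.035       # 3.5% transaction fee
processing_fee = price * 0.04 + 0.30  # 4% + $0.30 Direct Checkout processing

net = price - listing_fee - transaction_fee - processing_fee
print(f"${net:.2f}")  # $18.00
```

So roughly 10% of a $20 sale goes to fees, which is worth factoring into your pricing.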
Continue reading %7 eBay Alternatives for eCommerce Sellers%
With the advent of Single Page Applications (SPAs) and mobile applications, APIs have come to the forefront of web development. As we develop APIs to support our SPAs and mobile apps, securing them has been a major pain point. Token-based authentication is one of the most favored authentication mechanisms, but tokens are prone to various attacks. Mitigating those often leads to one-off solutions that make tokens non-exchangeable between diverse systems. JSON Web Tokens (JWT) were created to implement standards-based token handling and verification that can be exchanged between diverse systems without any issue.
JWTs carry information (called "claims") via JSON, hence the name JSON Web Tokens. JWT is a standard and has been implemented in almost all popular programming languages. Hence, tokens can be easily used or exchanged between systems implemented on diverse platforms.
JWTs are comprised of plain strings, so they can be easily exchanged in a URL or a HTTP header. They are also self-contained and carry information such as payload and signatures.
A JWT (pronounced 'JOT') consists of three strings separated by '.':
aaaaa.bbbbbbb.ccccccc
The first part is the header, second part is the payload, and third part is the signature.
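To see those three parts concretely, here is a sketch in Ruby that builds an HS256 token by hand (the claims and secret are made up for illustration; in a real Rails app you'd typically use a gem such as `jwt` rather than rolling your own):

```ruby
require "base64"
require "json"
require "openssl"

# Base64url-encode without padding, as the JWT spec requires
def b64url(data)
  Base64.urlsafe_encode64(data, padding: false)
end

secret  = "my-secret-key"                              # illustrative only
header  = b64url(JSON.generate(alg: "HS256", typ: "JWT"))
payload = b64url(JSON.generate(user_id: 42))

# The signature covers "header.payload", HMAC-SHA256 keyed with the shared secret
signature = b64url(
  OpenSSL::HMAC.digest("SHA256", secret, "#{header}.#{payload}")
)

token = [header, payload, signature].join(".")
puts token.count(".")  # 2 -- i.e. three dot-separated strings
```

Because each part is base64url-encoded, the whole token is URL- and header-safe, and the first two parts can be decoded (but not verified) by anyone.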
The header consists of two parts:
Continue reading %An Introduction to Using JWT Authentication in Rails%