Drupal Feeds

To give more insight into Drupal Association financials, we are launching a blog series. This is the first post in the series, and it is for all of you who love knowing the inner workings. It provides an overview of:

  • Our forecasting process
  • How financial statements are approved
  • The auditing process
  • How we report financials to the U.S. government via 990s

There’s a lot to share in this blog post and we appreciate you taking the time to read it.

Replacing Annual Budgets With Rolling Forecasts

Prior to 2016, the Drupal Association produced an annual budget, which is a common practice for non-profits. However, two years ago, we found that the Drupal market was changing quickly and that impacted our projected revenue. Plus, we needed faster and more timely performance analysis of pilot programs so we could adjust projections and evaluate program success throughout the year. In short, we needed to be more agile with our financial planning, so we moved to a rolling forecast model, using conservative amounts.

Using a rolling forecast means we don’t have a set annual budget. Instead, we project revenue and expense two years out into a forecast. Then, we update the forecast several times a year as we learn more. The first forecast of the year is similar to a budget. We study variance against this version throughout the year. As we conduct the additional forecasts during the year, we replace forecasts of completed months with actual expenses and income (“actuals”) and revise forecasts for the remaining months. This allows us to see much more clearly if we are on or off target and to adjust projections as conditions that could impact our financial year change and evolve. For example, if we learn that the community wants us to change a drupal.org ad placement that could impact revenue, we will downgrade the revenue forecast appropriately for this product.

In 2017, there will be three forecasts:

  • December 2016:  The initial forecast was created. This serves as our benchmark for the year and we run variances against it.
  • May 2017: We updated the forecast after DrupalCon Baltimore since this event has the biggest impact on both our expenses and revenue for the year.
  • October 2017: We will reforecast again after DrupalCon Vienna. This is our final update before the end of the year and will be the benchmark forecast for 2018.

Creating and approving the forecasts is a multi-party process.

  1. Staff create the initial forecast much like you would a budget. They are responsible for their income and expense budget line items and insert them into the forecasting worksheet. They use historical financials, vendor contracts and quotes, and more to project the amount for each line item and document all of their assumptions. Each budget line manager reviews those projections and assumptions with me. I provide guidance, challenge assumptions, and sign off on the inputs.

  2. Our virtual CFO firm, Summit CPA, analyzes the data and provides financial insight including: Income Statement, Balance Sheet, Cash Flow, and Margin Analysis. Through these reports, we can see how we are positioned to perform against our financial KPIs. This insight allows us to make changes or strengthen our focus on certain areas to ensure we are moving towards those KPIs - which I will talk about in another blog post. Once these reports are generated, the Drupal Association board finance committee receives them along with the forecasting assumptions. During a committee meeting, the committee is briefed by Summit and me. They ask questions to make sure various items are considered in the forecast and they provide advice for me to consider as we work to improve our financial health.

  3. Once the committee reviews the forecast and assumptions, the full board reviews it in an Executive Session. The board asks questions and provides advice as well. This review process happens with all three forecasts for the year.

Approving Financial Reports

As we move through the year, our Operations Manager and CFO team work together to close the books each month. This ensures our monthly actuals are correct. Then, our CFO team creates a monthly financial report that includes our financial statements (Income Statement and Balance Sheet) for the finance committee to review and approve. Each month the finance committee meets virtually and the entire team reviews the most recently prepared report. After asking questions and providing advice, the committee approves the report.

The full board receives copies of the financial reports quarterly and is asked to review and approve the statements for the preceding three months. Board members can ask questions, provide advice, and approve the statements in Executive Session or in the public board meeting. After approval, I write a blog post so the community can access and review the financial statements. You can see an example of the Q3 2016 financial statement blog here. The board just approved the Q4 2016 financials and I will do a blog post shortly to share the financial statements.

Financial Audits

Every two or three years the Association contracts to have the financial practices and transactions audited.  For the years that we do not conduct a full audit, we will contract for a “financial review” by our CPA firm (which is separate from our CFO firm) to ensure our financial policies and transactions are in good order.

An audit is an objective examination and evaluation of the financial statements of an organization to make sure that the records are a fair and accurate representation of the transactions they claim to represent. It can be done internally by employees of the organization, or externally by an outside firm.  Because we want accountability, we contracted with an external CPA firm, McDonald Jacobs, to handle the audit.

The Drupal Association conducts audits for several reasons:

  1. to demonstrate our commitment to financial transparency.

  2. to assure our community that we follow appropriate procedures to ensure that the community funds are being handled with care.  

  3. to give our board of directors outside assurance that the financial statements are free of material misstatements.

What do the auditors look at?  For 2016, our auditors will generally focus on three points:

  • Proper recording of income and expense: Auditors will ensure that our financial statements are an accurate representation of the business we have conducted. Did we record transactions on the right date, to the right account, and the right class? In other words, if we said that 2016 revenue was a certain amount, is that really true?

  • Financial controls: Preventing fraud is an important part of the audit. It is important to put the kinds of controls in place that can prevent common types of fraud, such as forged checks and payroll changes. Auditors look to see that there are two sets of eyes on every transaction, and that documentation is provided to verify expenses and check requests.

  • Policies and procedures: There are laws and regulations that require we have certain policies in place at our organization. Our auditors will look at our current policies to ensure they were in place and, in some cases, had been reviewed by the board and staff.

The primary goal of the audit is for the auditor to express an opinion on two aspects of the financial statements of the Association: the financial statements are fairly presented, and they are in accordance with generally accepted accounting principles (GAAP). Generally accepted accounting principles are the accepted body of accounting rules and policies established by the accounting profession. The purpose of these rules is to promote consistency and fairness in financial reporting throughout the business community. These principles provide comparability of financial information.

Once our audit for 2016 is complete and approved by the board (expected in early summer), we can move to have the 990 prepared. We look to have this item completed by September 2017.

Tax Filing: The Form 990

As a U.S.-based 501c3 exempt organization, and to maintain this tax-exempt status, the U.S. Internal Revenue Service (IRS) requires us to file a 990 each year. This form is also filed with state tax departments. The 990 is meant for the IRS and state regulators to ensure that non-profits continue to serve their stated charitable activities. The 990 can be helpful when you are reviewing our programs and finances, but know that it’s only a “snapshot” of our year.

You can find our past 990s here.

Here are some general points to keep in mind when reviewing our 990.


Lines 8-12 indicate our yearly revenue: not only the total, but also where we earned our income, broken out into four groups. Line 12 is the most important: total income for the year.

Lines 13-18 show expenses for the year, and where we focused.

Cash Reserves are noted on lines 20-22 on page 1.

The 990 has a comparison of the net assets from last year (or the beginning of the year) and the end of the current year, as well as illustrates the total assets and liabilities of the Association.


Part II shows our expenditures by category and major function (program services, management and general, and fundraising).


In Part III, we describe the activities performed in the previous year that adhere to our 501c3 designation.  You can see here that Drupal.org, DrupalCon and our Fiscal Sponsorship programs are noted.


Part IV details our assets and liabilities. Assets are our resources that we have at our disposal to execute on our mission.  Liabilities are the outstanding claims against those assets.


Part V lists our board and staff who are responsible in whole or in part for the operations of the organization. These entries include the titles and compensation of key employees.


This section contains a number of questions regarding our operations over the year. Any “yes” answers require explanation on the following page.

Schedule A, Part II—Compensation of the Five Highest Paid Independent Contractors for Professional Services

We list any of our contractors, if we have paid them more than $50,000, on this schedule.

Once our 990 is complete and filed we are required to post the return publicly, which we do here on our website.  We expect to have the 2016 990 return completed, filed and posted by September 2017.

Phew. I know that was long. Thank you for taking the time to read all of the steps we take to ensure financial health and accuracy. We are thankful for the great team work that goes into this process. Most of all we are thankful for our funders who provide the financial fuel for us to do our mission work.

Stay tuned for our next blog in this series: Update on Q4 2016 financials (to follow up on our Q3 2016 financial update)

The benefits of Rich snippets and how to implement structured data in Drupal 8 to enhance the way your pages are listed by search engines.



In this post, you will learn how to create a custom date format for Drupal 7.


Setting a clear list of expectations with the client for a project delivery goes a long way toward great client relationships. Mismatched and misunderstood project goals and targets always lead to dissatisfaction among team members, account heads, and all other stakeholders.

I manage a team of a few developers who build web applications in Drupal. While working on projects with my team, I have had the chance to practice a few of the points that I have mentioned in the article. It has not only kept us on track but also kept people happy and motivated.

What should you do? Be involved from the beginning

When you begin a project, make sure that you and your team members are involved from the beginning. There are times when the team will expand…

Drupal Modules: The One Percent — Footermap (video tutorial) NonProfit Mon, 05/22/2017 - 11:54 Episode 28

Here is where we bring awareness to Drupal modules running on less than 1% of reporting sites. Today we'll investigate Footermap, a module which renders expanded menus in a block.

Start: 2017-06-06 12:00 - 2017-06-08 12:00 UTC
Organizers: catch, cilefen, David_Rothstein, Fabianx, stefan.r, xjm
Event type: Online meeting (e.g. IRC meeting)

The monthly core patch (bug fix) release window is this Wednesday, June 07. Drupal 8.3.3 and 7.55 will be released with fixes for Drupal 8 and 7.

To ensure a reliable release window for the patch release, there will be a Drupal 8.3.x commit freeze from 12:00 UTC Tuesday to 12:00 UTC Thursday. Now is a good time to update your development/staging servers to the latest 8.3.x-dev or 7.x-dev code and help us catch any regressions in advance. If you do find any regressions, please report them in the issue queue. Thanks!

To see all of the latest changes that will be included in the releases, see the 8.3.x commit log and 7.x commit log.

Other upcoming core release windows after this week include:

  • Wednesday, June 21 (security release window)
  • Wednesday, July 05 (patch release window)
  • Wednesday, October 5 (scheduled minor release)

For more information on Drupal core release windows, see the documentation on release timing and security releases, as well as the Drupal core release cycle overview.

One of the reasons that I love Drupal 8 is the fact it is object-oriented and uses the Dependency Injection pattern with a centralized service container. If you’re new to the concept, here are some links for some fun reading.

But for now the basics are: things define their dependencies, and a centralized thing is able to give you an object instance with all of those dependencies provided. You don’t need to manually construct a class and provide its dependencies (constructor arguments).

This also means we do not have to use concrete classes! That means you can modify the class used for a service without ripping apart other code. Yay for being decoupled(ish)!
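To make the idea concrete, here is a minimal sketch (the service name is just an illustrative example; real code would more often receive the dependency through its constructor rather than the static wrapper):

```php
// Ask the container for a fully built service instance, instead of
// constructing the class and wiring its dependencies by hand.
$cart_session = \Drupal::service('commerce_cart.cart_session');
```

Because callers only depend on the service ID and its interface, the concrete class behind it can be swapped without changing this code.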

Why is this cool?

So that’s great, and all. But let’s actually use a real example to show how AWESOME this is. In Drupal Commerce we have the commerce_cart.cart_session service. This is how we know if an anonymous user has a cart or not. We assume this service will implement the \Drupal\commerce_cart\CartSessionInterface interface, which means we don’t care how you tell us, just tell us via our agreed methods.

The default class uses the native session handling. But we’re going to swap that out and use cookies instead. Why? Because skipping the session will preserve page cache while browsing the site catalogs and product pages.

Let’s do it

Let’s kick it off by creating a module called commerce_cart_cookies. This will swap out the existing commerce_cart.cart_session service to use our own implementation which relies on cookies instead of the PHP session.

The obvious: we need a commerce_cart_cookies.info.yml

    name: Commerce Cart Cookies
    description: Uses cookies for cart session instead of PHP sessions
    core: 8.x
    type: module
    dependencies:
      - commerce_cart

Now we need to create our class which will replace the default session handling. I’m not going to go into what the entire code would look like to satisfy the class, but the generic class would resemble the following. You can find a repo for this project at the end of the article.


<?php

namespace Drupal\commerce_cart_cookies;

use Drupal\commerce_cart\CartSessionInterface;
use Symfony\Component\HttpFoundation\RequestStack;

/**
 * Uses cookies to track active carts.
 *
 * We inject the request stack to handle cookies within the Request object,
 * and not directly.
 */
class CookieCartSession implements CartSessionInterface {

  /**
   * The current request.
   *
   * @var \Symfony\Component\HttpFoundation\Request
   */
  protected $request;

  /**
   * Creates a new CookieCartSession object.
   *
   * @param \Symfony\Component\HttpFoundation\RequestStack $request_stack
   *   The request stack.
   */
  public function __construct(RequestStack $request_stack) {
    $this->request = $request_stack->getCurrentRequest();
  }

  /**
   * {@inheritdoc}
   */
  public function getCartIds($type = self::ACTIVE) {
    // TODO: Implement getCartIds() method.
  }

  /**
   * {@inheritdoc}
   */
  public function addCartId($cart_id, $type = self::ACTIVE) {
    // TODO: Implement addCartId() method.
  }

  /**
   * {@inheritdoc}
   */
  public function hasCartId($cart_id, $type = self::ACTIVE) {
    // TODO: Implement hasCartId() method.
  }

  /**
   * {@inheritdoc}
   */
  public function deleteCartId($cart_id, $type = self::ACTIVE) {
    // TODO: Implement deleteCartId() method.
  }

}


Next we’re going to make our service provider class. This is a bit magical, as we do not actually register it anywhere. It just needs to exist. Drupal will look for classes that end in ServiceProvider within all enabled modules. Based on the implementation, you can add or alter services registered in the service container when it is being compiled (which is why the process is called a rebuild, not just a cache clear, in Drupal 8). The class must also start with a camel-cased version of your module name. So our class will be CommerceCartCookiesServiceProvider.

Create a src directory in your module and a CommerceCartCookiesServiceProvider.php file within it. Let’s scaffold out the bare minimum for our class.


<?php

namespace Drupal\commerce_cart_cookies;

use Drupal\Core\DependencyInjection\ServiceProviderBase;

class CommerceCartCookiesServiceProvider extends ServiceProviderBase { }

Luckily for us all, core provides \Drupal\Core\DependencyInjection\ServiceProviderBase for us. This base class implements ServiceProviderInterface and ServiceModifierInterface to make it easier for us to modify the container. Let’s override the alter method so we can prepare to modify the commerce_cart.cart_session service.


<?php

namespace Drupal\commerce_cart_cookies;

use Drupal\Core\DependencyInjection\ContainerBuilder;
use Drupal\Core\DependencyInjection\ServiceProviderBase;
use Symfony\Component\DependencyInjection\Reference;

class CommerceCartCookiesServiceProvider extends ServiceProviderBase {

  /**
   * {@inheritdoc}
   */
  public function alter(ContainerBuilder $container) {
    if ($container->hasDefinition('commerce_cart.cart_session')) {
      $container->getDefinition('commerce_cart.cart_session')
        ->setClass(CookieCartSession::class)
        ->setArguments([new Reference('request_stack')]);
    }
  }

}


We update the definition for commerce_cart.cart_session to use our class name, and also change its arguments to reflect our dependency on the request stack. The default service injects the session handler, whereas we need the request stack so we can retrieve cookies off of the current request.

The cart session service will now use our provided class when the container is rebuilt!

The project code can be found at https://github.com/mglaman/commerce_cart_cookies

In this tutorial, we'll show you how to add a "Printer-friendly version" button to your Drupal articles. The main reason you'd want to do this is as a courtesy to your readers. Many still print things they read online, and you don't want them to waste expensive printer ink printing your logo and theme along with the article.

This is a themed tutorial: our sister post, "Creating Printer-friendly Versions of WordPress Posts," covers the same topic for WordPress.

Without this solution, you'd likely need to create a separate CSS file with styles specifically for the printed page.  Fortunately, the "Printer, email and PDF versions" Drupal community module makes this much easier. It will automatically create a printer-friendly version of each article.
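For a sense of what that hand-rolled approach involves, here is a minimal print stylesheet sketch; the selectors are hypothetical and depend entirely on your theme's markup:

```css
/* Hide site chrome when printing, leaving just the article content. */
@media print {
  header,
  footer,
  .sidebar {
    display: none;
  }
  .node-article {
    width: 100%;
    font-size: 12pt;
  }
}
```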

This is the last part of a series on improving the way date ranges are presented in Drupal, by creating a field formatter that can omit the day, month or year where appropriate, displaying the date ranges in a nicer, more compact form:

  • 24–25 January 2017
  • 29 January–3 February 2017
  • 9:00am–4:30pm, 1 April 2017

In this post we look at adding an administrative interface, so site builders can add and edit formats from Drupal's UI.


Drupal 7 - Apache Solr Search: how to set up and how to index

Install Solr on the machine, set up the core, install and configure the Apache Solr Search module, and do the indexing.

heykarthikwithu Mon, 06/05/2017 - 13:25

Drupal 8 has become much more flexible for doing pretty much everything. In this article I want to talk a bit about menu links and show you how powerful the new system is compared to Drupal 7.

In Drupal 7, menu links were a thing of their own, with an API that you can use to create them programmatically and put them in a menu. So if you wanted to deploy a menu link in code, you’d have to write an update hook and create the link programmatically. All in a day’s…
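For contrast, a hedged sketch of that Drupal 7 pattern (the module name, path, and menu here are hypothetical):

```php
/**
 * Sketch of a Drupal 7 update hook that deploys a menu link in code.
 */
function my_module_update_7100() {
  // Define the link and save it with the D7 menu API.
  $link = array(
    'link_title' => 'This is my link',
    'link_path' => 'some/internal/page',
    'menu_name' => 'main-menu',
    'weight' => -1,
  );
  menu_link_save($link);
}
```

Removing the link later would mean writing yet another update hook.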

We have much more control in Drupal 8. First, it has become significantly easier to do the same thing. Menu links are now plugins discovered from YAML files. So, for example, to define a link in code, all you need to do is place the following inside a my_module.links.menu.yml file:

    my_module.link_name:
      title: 'This is my link'
      description: 'See some stuff on this page.'
      route_name: my_module.route_it_points_to
      parent: my_module.optional_parent_link_name_it_belongs_under
      menu_name: the_menu_name_we_want_it_in
      weight: -1

And that’s it. If you specify a parent link which is in a menu, you no longer even need to specify the menu name. So clearing the cache will get this menu link created and added to your menu. And even more, removing this code will remove your menu link from the menu. With D7 you need another update hook to clear that link.

Second, you can do far more powerful things than this. In the example above, we know the route name and have hardcoded it there. But what if we don’t know it yet and have to grab it from somewhere dynamically? That is where plugin derivatives come into play. For more information about what these are and how they work, do check out my previous article on the matter.

So let’s see an example of how we can define menu links dynamically. First, let’s head back to our *.links.menu.yml file and add our derivative declaration and then explain what we are doing:

    my_module.product_link:
      class: Drupal\my_module\Plugin\Menu\ProductMenuLink
      deriver: Drupal\my_module\Plugin\Derivative\ProductMenuLink
      menu_name: product

First of all, we want to dynamically create a menu link inside the product menu for all the products on our site. Let’s say those are entities.

There are two main things we need to define for our dynamic menu links: the class they use and the deriver class responsible for creating a menu link derivative for each product. Additionally, we can add here in the YAML file all the static information that will be common to all these links. In this case, the menu name they’ll be in is the same for all of them, so we might as well just add it here.

Next, we need to write those two classes. The first would typically go in the Plugin/Menu namespace of our module and can look as simple as this:

<?php

namespace Drupal\my_module\Plugin\Menu;

use Drupal\Core\Menu\MenuLinkDefault;

/**
 * Represents a menu link for a single Product.
 */
class ProductMenuLink extends MenuLinkDefault {}

We don’t even need to have any specific functionality in our class if we don’t need it. We can extend the MenuLinkDefault class, which contains all that is needed for the default interaction with menu links and, more importantly, implements the required MenuLinkInterface. But if we need to work with these programmatically a lot, we can add some helper methods to access plugin information.

Next, we can write our deriver class that goes in the Plugin/Derivative namespace of our module:

<?php

namespace Drupal\my_module\Plugin\Derivative;

use Drupal\Component\Plugin\Derivative\DeriverBase;
use Drupal\Core\Plugin\Discovery\ContainerDeriverInterface;
use Drupal\Core\Entity\EntityTypeManagerInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;

/**
 * Derivative class that provides the menu links for the Products.
 */
class ProductMenuLink extends DeriverBase implements ContainerDeriverInterface {

  /**
   * @var \Drupal\Core\Entity\EntityTypeManagerInterface
   */
  protected $entityTypeManager;

  /**
   * Creates a ProductMenuLink instance.
   *
   * @param $base_plugin_id
   * @param \Drupal\Core\Entity\EntityTypeManagerInterface $entity_type_manager
   */
  public function __construct($base_plugin_id, EntityTypeManagerInterface $entity_type_manager) {
    $this->entityTypeManager = $entity_type_manager;
  }

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container, $base_plugin_id) {
    return new static(
      $base_plugin_id,
      $container->get('entity_type.manager')
    );
  }

  /**
   * {@inheritdoc}
   */
  public function getDerivativeDefinitions($base_plugin_definition) {
    $links = [];
    // We assume we don't have too many...
    $products = $this->entityTypeManager->getStorage('product')->loadMultiple();
    foreach ($products as $id => $product) {
      $links[$id] = [
        'title' => $product->label(),
        'route_name' => $product->toUrl()->getRouteName(),
        'route_parameters' => ['product' => $product->id()],
      ] + $base_plugin_definition;
    }
    return $links;
  }

}

This is where most of the logic happens. First, we implement the ContainerDeriverInterface so that we can expose this class to the container and inject the Entity Type Manager. You can see the create() method signature is a bit different than you are used to. Second, we implement the getDerivativeDefinitions() method to return an array of plugin definitions based on the master definition (the one found in the YAML file). To this end, we load all our products and create the array of definitions.

Some things to note about this array of definitions. The keys of this array are the IDs of the derivatives, which in our case will match the Product IDs. However, the menu link IDs themselves will be made up of the following construct: [my_module].product_link:[product-id]. That is the name of the link we set in the YAML file + the derivative ID, separated by a colon.
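To illustrate the construct, assuming a product with ID 42 exists, this sketch (not part of the original article) shows the resulting plugin ID and one way to load it:

```php
// The derived menu link plugin ID: YAML link name + ':' + derivative ID.
$plugin_id = 'my_module.product_link:42';

// If needed, the link can be loaded through the menu link plugin manager.
$link = \Drupal::service('plugin.manager.menu.link')
  ->createInstance($plugin_id);
```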

The route name we add to the derivative is the canonical route of the product entity. And because this route is dynamic (has arguments) we absolutely must also have the route_parameters key where we add the necessary parameters for building this route. Had the route been static, no route params would have been necessary.

Finally, each definition is made up of what we specify here + the base plugin definition for the link (which actually also includes all the things we added in the YAML file). If we need to interact programmatically with these links and read some basic information about the products themselves, we can use the options key and store that data. This can then be read by helper methods in the Drupal\my_module\Plugin\Menu\ProductMenuLink class.

And that’s it. Now if we clear the cache, all our products are in the menu. If we create another product, it’s getting added to the menu (once the caches are cleared).
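If waiting for a cache clear is not an option, the menu link plugins can also be rediscovered programmatically; a hedged sketch (called, for example, from a hypothetical hook that runs after a product is saved):

```php
// Rediscover menu link plugins so new product links appear immediately.
\Drupal::service('plugin.manager.menu.link')->rebuild();
```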


You know how you can define action links and local tasks (tabs) in the same way as menu links, in their respective YAML files? Well, the same applies to derivatives. So, using this same technique, you can define local actions and tasks dynamically. The difference is that you will have a different class to extend for representing the links: for local tasks it is LocalTaskDefault, and for local actions it is LocalActionDefault.


In this article we saw how we can dynamically create menu links in Drupal 8 using derivatives. In doing so, we also got a brief refresher on how derivatives work. This is a very powerful subsystem of the Plugin API which hides a lot of powerful functionality. You just gotta dig it out and use it.

As of today, the Drupal Matrix API module now supports sending messages to a room via Rules. Now you can automatically configure notifications to Matrix rooms without touching any code!

This is useful if you want to get notified in a Matrix room of some event on your website, such as a new comment, a user registration, updated content, etc.

Rules is still in Alpha, and has some UI quirks, but it works fine.

We want to keep you as informed as possible about Drupal happenings, and one of the ways we do that is by looking at the best work from other authors over the past month. Here are the best Drupal blogs from May. We will start our list with Improvements and changes in Commerce 2.x by Sascha Grossenbacher. In this blog post, the author focuses on explaining some of the key differences in the new version of Drupal Commerce and how they affect developers and users. Our second choice is What makes DrupalCon different? from Dagny Evans. She…

There is a real “elixir of vivacity” that can help your Drupal website or app come alive in a way it never has. Sound enticing? You’ll discover the rest in today’s story. After a glimpse at combining Drupal with AngularJS, we are now moving on to another member of the JavaScript family that is rapidly gaining popularity: Node.js. Let’s discover the reasons for its recognition, the benefits of using Node.js with Drupal, and the tool that helps you bring them together.


Responsive design brings a fascinating array of challenges to both designers and developers. Using background images in a call to action or blockquote element is a great way to add visual appeal to a design, as you can see in the image to the left.

However, at mobile sizes, you’re faced with some tough decisions. Do you try and stretch the image to fit the height of the container? If so, at very tall/narrow widths, you’re forced to load a giant image, and it likely won’t be recognizable.

In addition, forcing mobile users to load a large image is bad for performance. Creating custom responsive image sets would work, but that sets up a maintenance problem, something most clients will not appreciate.

Luckily, there’s a solution that allows us to keep the image aspect ratio, set up standard responsive images, and it looks great on mobile as well. The fade-out!

I’ll be using screenshots and code here, but I’ve also made all 6 steps available on CodePen if you want to play with the code and try out different colors, images, etc…

Let’s start with that first blockquote:

(pen) This is set up for desktop - the image aspect ratio determines the height of the container using the padding-ratio trick. Everything in the container uses absolute positioning and flexbox for centering. We have a simple rgba() background set using the :before pseudo-element on the .parent-container:

:before {
  content: "";
  display: block;
  position: absolute;
  width: 100%;
  height: 100%;
  background-color: rgba(0, 0, 0, 0.4);
  z-index: 10;
  top: 0;
}

(pen) The issues arise once we get a quote of reasonable length, and/or the page width gets too small. As you can see, it overflows and breaks quite badly.

(pen) We can fix this by setting some changes to take place at a certain breakpoint, depending on the max length of the field and the size of the image used.

Specifically, we remove the padding from the parent element, and make the .content-wrapper position: static. (I like to set a min-height as well just in case the content is very small)

(pen) Now we can add the fader code - background-image: linear-gradient, which can be used unprefixed. This is inserted into the .image-wrapper using another :before pseudo-element:

:before {
  content: "";
  display: inline-block;
  position: absolute;
  width: 100%;
  height: 100%;
  background-image: linear-gradient(
    // Fade over the entire image - not great.
    rgba(0, 0, 0, 0.0) 0%,
    rgba(255, 0, 0, 1.0) 100%
  );
}

(pen) The issue now is that the gradient covers the entire image, but we can fix that easily by adding additional rgba() values, in effect ‘stretching’ the part of the gradient that’s transparent:

:before {
  background-image: linear-gradient(
    // Transparent at the top.
    rgba(0, 0, 0, 0.0) 0%,
    // Still transparent through 70% of the image.
    rgba(0, 0, 0, 0.0) 70%,
    // Now fade to solid to match the background.
    rgba(255, 0, 0, 1.0) 100%
  );
}

(pen) Finally, we can fine-tune the gradient by adding even more rgba() values and setting the percentages and opacity as appropriate.

Once we’re satisfied that the gradient matches the design, all that’s left is to make the gradient RGBA match the .parent-container background color (not the overlay - this tripped me up for a while!), which in our case is supposed to be #000:

:before {
  background-image: linear-gradient(
    rgba(0, 0, 0, 0.0) 0%,
    rgba(0, 0, 0, 0.0) 70%,
    /* These three 'smooth' out the fade. */
    rgba(0, 0, 0, 0.2) 80%,
    rgba(0, 0, 0, 0.7) 90%,
    rgba(0, 0, 0, 0.9) 95%,
    /* Solid to match the background. */
    rgba(0, 0, 0, 1.0) 100%
  );
}

We’ll be rolling out sites in a few weeks with these techniques in live code, and with several slight variations to the implementation (mostly adding responsive images and making allowances for Drupal’s markup), but this is the core idea used.

Feel free to play with the code yourself, and change the rgba() values so that you can see what each is doing.

docker-console init --tpl drupal7

People who follow our blog already know that we're using Docker at Droptica. We've also already told you how easy it is to start a project using our docker-drupal application (https://www.droptica.pl/blog/poznaj-aplikacje-docker-drupal-w-15-minut-docker-i-przyklad-projektu-na-drupal-8/). Another step on the road to becoming efficient and proficient with Docker is the docker-console application, a newer version of docker-drupal; exactly like its predecessor, it was created to make building a working environment for Drupal simpler and more efficient. How does it all work? You'll see in this write-up. Since we're all working on Linux (mainly Ubuntu), all commands shown in this post were executed on Ubuntu 16.04.
Acquia Showcases Headless Drupal Development for Boreal Mountain Resort
Nate Gengler Tue, 06/06/2017 - 12:59

We recently launched our first decoupled Drupal site for Boreal Mountain Resort. Working closely with our hosting platform, Acquia, and front-end developers, Hoorooh Digital, we spun up rideboreal.com as a fully customized front-end experience on the back-end framework of Drupal 8.

Our hosting partner, Acquia, recapped the build with a fantastic blog post. It offers an in-depth look at the working relationship between Elevated Third, Acquia and Hoorooh Digital.

There is always satisfaction in retracing the progression of a project from initial discovery to final site launch. But more than an excuse to pat ourselves on the back, reflecting on projects helps us improve. It gives us a sense of how we stack up against our original goals and provides context for future builds.
For more information on decoupled Drupal Development and other industry news, Acquia’s blog is an awesome resource. Check it out! 




If you want to add Google's reCaptcha (https://www.google.com/recaptcha/intro/index.html) to your Drupal 7 forms programmatically you need to follow these two steps:

1) Install and enable captcha (https://www.drupal.org/project/captcha) and recaptcha (https://www.drupal.org/project/recaptcha) modules. The best ...


"The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect."
- Tim Berners-Lee, W3C Director and inventor of the World Wide Web

As a community, Drupal wants to be sure that the websites and the features we build are accessible to everyone, including those who have disabilities. To be inclusive we must think beyond color contrasts, font scaling, and alt texts. Identifying the barriers and resolving them is fundamental in making the web inclusive for everyone.

Accessibility fosters social equality and inclusion for not just those with disabilities but also those with intermittent internet access in rural communities and developing nations.

The Bay Area is fortunate to have Mike Gifford visiting from Canada and he carries with him unique perspectives on web accessibility. Hook 42 has organized an evening with Mike for conversation, collaboration, and thought leadership surrounding Drupal Accessibility.

How do you import an RSS feed into entities with Drupal 8? In Drupal 6 and 7, you probably used the Feeds module. Feeds 7 made it easy (-ish) to click together a configuration that matches an RSS (or any XML, or CSV, or OPML, etc) source to a Drupal entity type, maps source data into Drupal fields, and runs an import with the site Cron. Where has that functionality gone in D8? I recently had to build a podcast mirror for a client that needed this functionality, and I was surprised at what I found.

Feeds module doesn’t have a stable release candidate, and it doesn’t look like one is coming any time soon. They’re still surveying people about what feeds module should even DO in D8. As the module page explains:

It’s not ready yet, but we are brainstorming about what would be the best way forward. Want to help us? Fill in our survey.
If you decide to use it, don’t be mad if we break it later.

This does not inspire confidence.

The next great candidate is Aggregator module (in core). Unfortunately, Aggregator gives you no control over the kind of entity to create, let alone any kind of field mapping. It imports content into its own Aggregated Content entity, with everything in one field, and links offsite. I suppose you could extend it to choose your own entity type, map fields, etc., but that seems like a lot of work for such a simple feature.

Frustrating, right?

What if I told you that Drupal 8 can do everything Feeds 7 can?

What if I told you that it’s even better: instead of clicking through endless menus and configuration links, waiting for things to load, missing problems, and banging your head against the mouse, you can set this up with one simple piece of text. You can copy and paste it directly from this blog post into Drupal’s admin interface.

What? How?

Drupal 8 can do all the Feedsy stuff you like with Migrate module. Migrate in D8 core already contains all the elements you need to build a regular importer of ANYTHING into D8. Add a couple of contrib modules to provide specific plugins for XML sources and convenience drush functions, and baby you’ve got a stew goin’!

Here’s the short version Howto:

1) Download and enable migrate_plus and migrate_tools modules. You should be doing this with composer, but I won’t judge. Just get them into your codebase and enable them. Migrate Plus provides plugins for core Migrate, so you can parse remote XML, JSON, CSV, or even arbitrary spreadsheet data. Migrate Tools gives us drush commands for running migrations.
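With composer, getting the two modules in and enabled might look like this (a sketch assuming a standard composer-managed Drupal 8 site with drush available):

```shell
# Fetch the modules into your codebase...
composer require drupal/migrate_plus drupal/migrate_tools

# ...and enable them along with core Migrate.
drush en -y migrate migrate_plus migrate_tools
```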

2) Write your Migration configuration in text, and paste it into the configuration import admin page (admin/config/development/configuration/single/import), or import it another way. I’ve included a starter YAML just below, you should be able to copypasta, change a few values, and be done in time for tea.

3) Add a line to your system cron to run drush mi my_rss_importer at whatever interval you like.

That’s it. One YAML file, most of which is copypasta. One cronjob. All done!
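For reference, that cron entry might look like the following. The site path, drush location, and 30-minute schedule are assumptions; adjust them for your server:

```shell
# m h dom mon dow  command  (via crontab -e)
*/30 * * * * cd /var/www/mysite && /usr/local/bin/drush mi my_rss_importer --quiet
```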

Here’s my RSS importer config for your copy and pasting pleasure. If you’re already comfortable with migration YAMLs and XPaths, just add the names of your RSS fields as selectors in the source section, map them to drupal fields in the process section, and you’re all done!

If you aren’t familiar with this stuff yet, don’t worry! We’ll dissect this together, below.

id: my_rss_importer
label: 'Import my RSS feed'
status: true
source:
  plugin: url
  data_fetcher_plugin: http
  urls: 'https://example.com/feed.rss'
  data_parser_plugin: simple_xml
  item_selector: /rss/channel/item
  fields:
    - name: guid
      label: GUID
      selector: guid
    - name: title
      label: Title
      selector: title
    - name: pub_date
      label: 'Publication date'
      selector: pubDate
    - name: link
      label: 'Origin link'
      selector: link
    - name: summary
      label: Summary
      selector: 'itunes:summary'
    - name: image
      label: Image
      selector: 'itunes:image[''href'']'
  ids:
    guid:
      type: string
destination:
  plugin: 'entity:node'
process:
  title: title
  field_remote_url: link
  body: summary
  created:
    plugin: format_date
    from_format: 'D, d M Y H:i:s O'
    to_format: 'U'
    source: pub_date
  status:
    plugin: default_value
    default_value: 1
  type:
    plugin: default_value
    default_value: podcast_episode

Some of you can just stop here. If you’re familiar with the format and the structures involved, this example is probably all you need to set up your easy RSS importer.

In the interest of good examples for Migrate module though, I’m going to continue. Read on if you want to learn more about how this config works, and how you can use Migrate to do even more amazing things…

Anatomy of a migration YAML

Let’s dive into that YAML a bit. Migrate is one of the most powerful components of Drupal 8 core, and this configuration is your gateway to it.

That YAML looks like a lot, but it’s really just 4 sections. They can appear in any order, but we need all 4: General information, source, destination, and data processing. This isn’t rocket science after all! Let’s look at these sections one at a time.

General information

id: my_rss_importer
label: 'My RSS feed importer'
status: true

This is the basic stuff about the migration configuration. At a minimum it needs a unique machine-readable ID, a human-readable label, and status: true so it’s enabled. There are other keys you can include here for fun extra features, like module dependencies, groupings (so you can run several imports together!), tags, and language. These are the critical ones, though.


source:
  plugin: url
  data_fetcher_plugin: file
  urls: 'https://example.com/feed.rss'
  data_parser_plugin: simple_xml
  item_selector: /rss/channel/item
  fields:
    - name: guid
      label: GUID
      selector: guid
    - name: title
      label: Item Title
      selector: title
    - name: pub_date
      label: 'Publication date'
      selector: pubDate
    - name: link
      label: 'Origin link'
      selector: link
    - name: summary
      label: Summary
      selector: 'itunes:summary'
  ids:
    guid:
      type: string

This is the one that intimidates most people: it’s where you describe the RSS source. Migrate module is even more flexible than Feeds was, so there’s a lot to specify here… but it all makes sense if you take it in small pieces.

First: we want to use a remote file, so we’ll use the Url plugin (there are others, but none that we care about right now). All the rest of the settings belong to the Url plugin, even though they aren’t indented or anything.

There are two possibilities for Url’s data_fetcher setting: file and http. file is for anything you could pass to PHP’s file_get_contents, including remote URLs. There are some great performance tricks in there, so it’s a good option for most use cases. We’ll be using file for our example. http is specifically for remote files accessed over HTTP, and lets you use the full power of the HTTP spec to get your file. Think authentication headers, cache rules, etc.

Next we declare which plugin will read (parse) the data from that remote URL. We can read JSON, SOAP, arbitrary XML… in our use case this is an RSS feed, so we’ll use one of the XML plugins. SimpleXML is just what it sounds like: a simple way to get data out of XML. In extreme use cases you might use XML instead, but I haven’t encountered that yet (ever, anywhere, in any of my projects). TL;DR: SimpleXML is great. Use it.

Third, we have to tell the source where it can find the actual items to import. XML is freeform, so there’s no way for Migrate to know where the future “nodes” are in the document. So you have to give it the XPath to the items. RSS feeds have a standardized path: /rss/channel/item.
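To make that concrete, here's the shape of a stripped-down RSS document that the /rss/channel/item selector points into (a minimal sketch, not a complete feed):

```xml
<rss version="2.0">
  <channel>
    <title>My Podcast</title>
    <!-- /rss/channel/item matches each of these -->
    <item>
      <guid>episode-1-guid</guid>
      <title>Episode 1</title>
      <pubDate>Tue, 06 Jun 2017 12:00:00 +0000</pubDate>
    </item>
  </channel>
</rss>
```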

Next we have to identify the “fields” in the source. You see, Migrate module is built around the idea that you’ll map source fields to destination fields. That’s core to how it thinks about the whole process. But XML (and by extension RSS) is an unstructured format; it doesn’t think of itself as having “fields” at all. So we’ll have to give our source plugin XPaths for the data we want out of the feed, assigning each path to a virtual “field”. These “fake fields” let Migrate treat this source just like any other.

If you haven’t worked with XPaths before, the example YAML in this post gives you most of what you need to know. It’s just a simple text system for specifying a tag within an unstructured XML document. Not too complicated when you get into it. You may want to find a good tutorial to learn some of the tricks.

Let’s look at one of these “fake fields”:

name: summary
label: Summary
selector: 'itunes:summary'

name is how we’ll address this field in the rest of the migration. It’s the source “field name”. label is the human readable name for the field. selector is the XPath inside the item. Most items are flat – certainly in RSS – so it’s basically just the tag that surrounds the data you want. There, was that so hard?

As a side note, you can see that my RSS feeds tend to be for iTunes. Sometimes the world eats an apple, sometimes an apple eats the world. Buy me a beer at Drupalcon and we can argue about standards.

Fifth and finally, we identify which “field” in the source contains a unique identifier. Migrate module keeps track of the association between the source and destination objects, so it can handle updates, rollbacks, and more. The example YAML relies on the very common (but technically optional) <guid> tag as a unique identifier.


destination:
  plugin: 'entity:node'

Yep, it’s that simple. This is where you declare what Drupal entity type will receive the data. Actually, you could write any sort of destination plugin for this – if you want Drupal to migrate data into some crazy exotic system, you can do it! But in 99.9% of cases you’re migrating into Drupal entities, so you’ll want entity:something here. Don’t worry about bundles (content types) here; that’s something we take care of in field mapping.


process:
  title: title
  field_remote_url: link
  body: summary
  created:
    plugin: format_date
    from_format: 'D, d M Y H:i:s O'
    to_format: 'U'
    source: pub_date
  status:
    plugin: default_value
    default_value: 1
  type:
    plugin: default_value
    default_value: podcast_episode

This is where the action happens: the process section describes how destination fields should get their data from the source. It’s the “field mapping”, and more. Each key is a destination field, each value describes where the data comes from.

If you don’t want to migrate the whole field exactly as it’s presented in the source, you can put individual fields through Migrate plugins. These plugins apply all sorts of changes to the source content, to get it into the shape Drupal needs for a field value. If you want to take a substring from the source, explode it into an array, extract one array value and make sure it’s a valid Drupal machine name, you can do that here. I won’t do it in my example because that sort of thing isn’t common for RSS feeds, but it’s definitely possible.
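For instance, that exact substring-explode-extract-machine-name chain could look something like this. The field_topic mapping is hypothetical (it's not part of the example above), but all four process plugins ship with core Migrate:

```yaml
field_topic:
  # Take the first 32 characters of the title...
  - plugin: substr
    source: title
    start: 0
    length: 32
  # ...split on ': ' into an array...
  - plugin: explode
    delimiter: ': '
  # ...keep only the first element...
  - plugin: extract
    index:
      - 0
  # ...and normalize it into a valid machine name.
  - plugin: machine_name
```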

The examples of plugins that you see here are simple ones. status and type show you how to set a fixed field value. There are other ways, but the default_value plugin is the best way to keep your sanity.

The created field is a bit more interesting. The Drupal field stores a Unix timestamp of the time the node was authored, but the source RSS uses a string date format. We’ll use the format_date plugin to convert between the two. Neat, eh?

Don’t forget to map values into Drupal’s status and type fields! type is especially important: that’s what determines the content type, and nodes can’t be saved without it!

That’s it?

Yes, that’s it. You now have a migrator that pulls from any kind of remote source, and creates Drupal entities out of the items it finds. Your system cron entry makes sure this runs on a regular schedule, rather than overloading Drupal’s cron.

More importantly, if you’re this comfortable with Migrate module, you’ve just gained a lot of new power. This is a framework for getting data from anywhere, to anywhere, with a lot of convenience functionality in between.

Happy feeding!

Tips and tricks

OK I lied, there is way more to say about Migrate. It’s a wonderful, extensible framework, and that means there are lots of options for you. Here are some of the obstacles and solutions I’ve found helpful.

Importing files

Did you notice that I didn’t map the images into Drupal fields in my example? That’s because it’s a bit confusing. We actually have an image URL that we need to download, then we have to create a file entity based on the downloaded file, and then we add the File ID to the node’s field as a value. That’s more complicated than I wanted to get into in the general example.

To do this, we have to create a pipeline of plugins that will operate in sequence, to create the value we want to stick in our field_image. It looks something like this:

field_image:
  - plugin: download
    source:
      - image
      - constants/destination_uri
    rename: true
  - plugin: entity_generate

Looking at that download plugin, image seems clear. That’s the source URL we got out of the RSS feed. But what is constants/destination_uri, I hear you cry? I’m glad you asked. It’s a constant, which I added in the source section and didn’t tell you about. You can add any arbitrary keys to the source section, and they’ll be available like this in processing. It is good practice to lump all your constants together into one key, to keep the namespace clean. This is what it looks like:

source:
  # ... usual source stuff here ...
  constants:
    destination_uri: 'public://my_rss_feed/post.jpg'

Before you ask, yes this is exactly the same as using the default_value plugin. Still, default_value is preferred for readability wherever possible. In this case it isn’t really possible.

Also, note that the download plugin lets me set rename: true. This means that in case of a name conflict, a 0, 1, 2, 3 etc will be added to the end of the filename.

You can see the whole structure here, of one plugin passing its result to the next. You can chain unlimited plugins together this way…

Multiple interrelated migrations

One of the coolest tricks that Migrate can do is to manage interdependencies between migrations. Maybe you don’t want those images just as File entities, you actually want them in Paragraphs, which should appear in the imported node. Easy-peasy.

First, you have to create a second migration for the Paragraph. Technically you should have a separate migration YAML for each destination entity type. (Yes, entity_generate is a dirty way to get around this; use it sparingly.) So we create our second migration just for the paragraph, like this:

id: my_rss_images_importer
label: 'Import the images from my RSS feed'
status: true
source:
  plugin: url
  data_fetcher_plugin: http
  urls: 'https://example.com/feed.rss'
  data_parser_plugin: simple_xml
  item_selector: /rss/channel/item
  fields:
    - name: guid
      label: GUID
      selector: guid
    - name: image
      label: Image
      selector: 'itunes:image[''href'']'
  ids:
    guid:
      type: string
  constants:
    destination_uri: 'public://my_rss_feed/post.jpg'
destination:
  plugin: 'entity:paragraph'
process:
  type:
    plugin: default_value
    default_value: podcast_image
  field_image:
    - plugin: download
      source:
        - image
        - constants/destination_uri
      rename: true
    - plugin: entity_generate

If you look at that closely, you’ll see it’s a simpler version of the node migration we did at first. I did the copy pasting myself! Here are the differences:

  • Different ID and label (duh)
  • We only care about two “fields” on the source: GUID and the image URL.
  • The destination is a paragraph instead of a node.
  • We’re doing the image trick I just mentioned.

Now, in the node migration, we can add our paragraphs field to the “process” section like this:

field_paragraphs:
  plugin: migration_lookup
  migration: my_rss_images_importer
  source: guid

We’re using the migration_lookup plugin. This plugin takes the value of the field given in source, and looks it up in my_rss_images_importer to see if anything with that source ID was migrated. Remember where we configured the source plugin to know that guid was the unique identifier for each item in this feed? That comes in handy here.

So we pass the guid to migration_lookup, and it finds the Drupal entity ID of the paragraph that was created for that guid, returning it to use as the field value. You can use this trick to associate content migrated from separate feeds, totally separate data sources, or whatever.

You should also add a dependency on my_rss_images_importer at the bottom of your YAML file, like this:

migration_dependencies:
  required:
    - my_rss_images_importer

This will ensure that my_rss_images_importer will always run before my_rss_importer.

(NB: in Drupal < 8.3, this plugin is called migration)

Formatting dates

Very often you will receive dates in a format other than what Drupal wants to accept as a valid field value. In this case the format_date process plugin comes in very handy, like this:

field_published_date:
  plugin: format_date
  from_format: 'D, d M Y H:i:s O'
  to_format: 'Y-m-d\TH:i:s'
  source: pub_date

This one is pretty self-explanatory: from format, to format, and source. This is important when migrating from Drupal 6, whose date fields store dates differently from 8. It’s also sometimes handy for RSS feeds. :)

Drush commands

Very important for testing, and the whole reason we have migrate_plus module installed! Here are some handy drush commands for interacting with your migration:

  • drush ms: Gives you the status of all known migrations. How many items are there to import? How many have been imported? Is the import running?
  • drush migrate-rollback: Rolls back one or more migrations, deleting all the imported content.
  • drush migrate-messages: Get logged messages for a particular migration.
  • drush mi: Runs a migration. use --all to run them all. Don’t worry, Migrate will sort out any dependencies you’ve declared and run them in the right order. Also worth noting: --limit=10 does a limited run of 10 items, and --feedback=10 gives you an in-progress status line every 10 items (otherwise you get nothing until it’s finished!).
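Put together, a typical develop-test loop with those commands might look like this (using the migration ID from the example above):

```shell
drush ms                                  # where do things stand?
drush mi my_rss_importer --limit=10 --feedback=10   # trial run of 10 items
drush migrate-messages my_rss_importer    # anything logged during the run?
drush migrate-rollback my_rss_importer    # wipe the imported items and try again
```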

Okay, now that’s really it. Happy feeding!