Drupal Feeds

Start: 2017-06-06 12:00 - 2017-06-08 12:00 UTC
Organizers: catch, cilefen, David_Rothstein, Fabianx, stefan.r, xjm
Event type: Online meeting (e.g. IRC meeting)

The monthly core patch (bug fix) release window is this Wednesday, June 07. Drupal 8.3.3 and 7.55 will be released with fixes for Drupal 8 and 7.

To ensure a reliable release window for the patch release, there will be a Drupal 8.3.x commit freeze from 12:00 UTC Tuesday to 12:00 UTC Thursday. Now is a good time to update your development/staging servers to the latest 8.3.x-dev or 7.x-dev code and help us catch any regressions in advance. If you do find any regressions, please report them in the issue queue. Thanks!

To see all of the latest changes that will be included in the releases, see the 8.3.x commit log and 7.x commit log.

Other upcoming core release windows after this week include:

  • Wednesday, June 21 (security release window)
  • Wednesday, July 05 (patch release window)
  • Wednesday, October 4 (scheduled minor release)

For more information on Drupal core release windows, see the documentation on release timing and security releases, as well as the Drupal core release cycle overview.

One of the reasons that I love Drupal 8 is the fact that it is object oriented and uses the dependency injection pattern with a centralized service container. If you’re new to the concept, here are some links for some fun reading.

But for now, the basics: things define their dependencies, and a centralized thing is able to give you an object instance with all of those dependencies provided. You don’t need to manually construct a class and provide its dependencies (constructor arguments).
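Here’s a minimal sketch of the difference. The service name comes from the Commerce example below; the class names and construction details are purely illustrative:

    // Manual construction: you build every dependency yourself.
    // (Illustrative only.)
    $session = new CookieCartSession(new RequestStack());

    // With the service container: ask for the service by name, and the
    // container constructs the object with all dependencies provided.
    $session = \Drupal::service('commerce_cart.cart_session');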

This also means we do not have to use concrete classes! That means you can modify the class used for a service without ripping apart other code. Yay for being decoupled(ish)!

Why is this cool?

So that’s great, and all. But let’s actually use a real example to show how AWESOME this is. In Drupal Commerce we have the commerce_cart.cart_session service. This is how we know if an anonymous user has a cart or not. We assume this service will implement the \Drupal\commerce_cart\CartSessionInterface interface, which means we don’t care how you tell us, just tell us via our agreed methods.

The default class uses the native session handling. But we’re going to swap that out and use cookies instead. Why? Because skipping the session will preserve page cache while browsing the site catalogs and product pages.

Let’s do it

Let’s kick it off by creating a module called commerce_cart_cookies. This will swap out the existing commerce_cart.cart_session service to use our own implementation which relies on cookies instead of the PHP session.

The obvious: we need a commerce_cart_cookies.info.yml

    name: Commerce Cart Cookies
    description: Uses cookies for cart session instead of PHP sessions
    core: 8.x
    type: module
    dependencies:
    - commerce_cart

Now we need to create our class which will replace the default session handling. I’m not going to go into what the entire code would look like to satisfy the class, but the generic class would resemble the following. You can find a repo for this project at the end of the article.

    <?php

    namespace Drupal\commerce_cart_cookies;

    use Drupal\commerce_cart\CartSessionInterface;
    use Symfony\Component\HttpFoundation\RequestStack;

    /**
     * Uses cookies to track active carts.
     *
     * We inject the request stack to handle cookies within the Request object,
     * and not directly.
     */
    class CookieCartSession implements CartSessionInterface {

      /**
       * The current request.
       *
       * @var \Symfony\Component\HttpFoundation\Request
       */
      protected $request;

      /**
       * Creates a new CookieCartSession object.
       *
       * @param \Symfony\Component\HttpFoundation\RequestStack $request_stack
       *   The request stack.
       */
      public function __construct(RequestStack $request_stack) {
        $this->request = $request_stack->getCurrentRequest();
      }

      /**
       * {@inheritdoc}
       */
      public function getCartIds($type = self::ACTIVE) {
        // TODO: Implement getCartIds() method.
      }

      /**
       * {@inheritdoc}
       */
      public function addCartId($cart_id, $type = self::ACTIVE) {
        // TODO: Implement addCartId() method.
      }

      /**
       * {@inheritdoc}
       */
      public function hasCartId($cart_id, $type = self::ACTIVE) {
        // TODO: Implement hasCartId() method.
      }

      /**
       * {@inheritdoc}
       */
      public function deleteCartId($cart_id, $type = self::ACTIVE) {
        // TODO: Implement deleteCartId() method.
      }

    }
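To make this concrete, here is one possible (untested) sketch of how the first method could read cart IDs from a cookie. The cookie name and JSON encoding are assumptions for illustration, not necessarily what the finished project does:

    public function getCartIds($type = self::ACTIVE) {
      // Assumed cookie name; the real module may choose differently.
      $cookie = $this->request->cookies->get('commerce_cart_' . $type);
      // Assumes the cart IDs are stored as a JSON-encoded array.
      return $cookie ? array_map('intval', json_decode($cookie, TRUE)) : [];
    }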

Next we’re going to make our service provider class. This is a bit magical, as we do not actually register it anywhere; it just needs to exist. Drupal looks for classes ending in ServiceProvider within all enabled modules. Based on the implementation, you can add or alter services registered in the service container when it is being compiled (which is why the process is called a rebuild, not just a cache clear, in Drupal 8). The class name must also start with a CamelCase version of your module name, so our class will be CommerceCartCookiesServiceProvider.

Create a src directory in your module and a CommerceCartCookiesServiceProvider.php file within it. Let’s scaffold out the bare minimum for our class.

    <?php

    namespace Drupal\commerce_cart_cookies;

    use Drupal\Core\DependencyInjection\ServiceProviderBase;

    class CommerceCartCookiesServiceProvider extends ServiceProviderBase {}

Luckily for us all, core provides \Drupal\Core\DependencyInjection\ServiceProviderBase for us. This base class implements ServiceProviderInterface and ServiceModifierInterface to make it easier for us to modify the container. Let’s override the alter method so we can prepare to modify the commerce_cart.cart_session service.

    <?php

    namespace Drupal\commerce_cart_cookies;

    use Drupal\Core\DependencyInjection\ContainerBuilder;
    use Drupal\Core\DependencyInjection\ServiceProviderBase;
    use Symfony\Component\DependencyInjection\Reference;

    class CommerceCartCookiesServiceProvider extends ServiceProviderBase {

      /**
       * {@inheritdoc}
       */
      public function alter(ContainerBuilder $container) {
        if ($container->hasDefinition('commerce_cart.cart_session')) {
          $container->getDefinition('commerce_cart.cart_session')
            ->setClass(CookieCartSession::class)
            ->setArguments([new Reference('request_stack')]);
        }
      }

    }

We update the definition for commerce_cart.cart_session to use our class name, and also change its arguments to reflect our dependency on the request stack. The default service injects the session handler, whereas we need the request stack so we can retrieve cookies off of the current request.
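For reference, the default definition in commerce_cart.services.yml looks roughly like this (simplified; treat it as an approximation rather than the exact shipped definition):

    commerce_cart.cart_session:
      class: Drupal\commerce_cart\CartSession
      arguments: ['@session']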

The cart session service will now use our provided class when the container is rebuilt!

The project code can be found at https://github.com/mglaman/commerce_cart_cookies

In this tutorial, we'll show you how to add a "Printer-friendly version" button to your Drupal articles. The main reason to do this is as a courtesy to your readers: many people still print things they read online, and you don't want them to waste expensive printer ink printing your logo and theme along with the article.

This is a themed tutorial: our sister post, "Creating Printer-friendly Versions of Wordpress Posts", covers the same topic for WordPress.

Without this solution, you'd likely need to create a separate CSS file with styles specifically for the printed page. Fortunately, the "Printer, email and PDF versions" Drupal community module makes this much easier: it automatically creates a printer-friendly version of each article.
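For comparison, the manual approach would mean maintaining a print stylesheet yourself, along these lines (the selectors are hypothetical and depend entirely on your theme):

    /* Hide site chrome when printing. */
    @media print {
      header, footer, .sidebar, .region-navigation { display: none; }
      body { color: #000; background: #fff; }
    }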

This is the last part of a series on improving the way date ranges are presented in Drupal, by creating a field formatter that can omit the day, month or year where appropriate, displaying the date ranges in a nicer, more compact form:

  • 24–25 January 2017
  • 29 January–3 February 2017
  • 9:00am–4:30pm, 1 April 2017

In this post we look at adding an administrative interface, so site builders can add and edit formats from Drupal's UI.


Drupal 7 - Apache Solr Search: how to set it up and how to index?

Install Solr on the machine, set up the core, install and configure the Apache Solr Search module, and do the indexing.

heykarthikwithu Mon, 06/05/2017 - 13:25

Drupal 8 has become much more flexible for doing pretty much everything. In this article I want to talk a bit about menu links and show you how powerful the new system is compared to Drupal 7.

In Drupal 7, menu links were a thing of their own with an API that you could use to create them programmatically and put them in a menu. So if you wanted to deploy a menu link in code, you’d have to write an update hook and create the link programmatically. All in a day’s…

We have much more control in Drupal 8. First, it has become significantly easier to do the same thing. Menu links are now plugins discovered from YAML files. So for example, to define a link in code, all you need is place the following inside a my_module.links.menu.yml file:

    my_module.link_name:
      title: 'This is my link'
      description: 'See some stuff on this page.'
      route_name: my_module.route_it_points_to
      parent: my_module.optional_parent_link_name_it_belongs_under
      menu_name: the_menu_name_we_want_it_in
      weight: -1

And that’s it. If you specify a parent link which is in a menu, you no longer even need to specify the menu name. So clearing the cache will get this menu link created and added to your menu. And even more, removing this code will remove your menu link from the menu. With D7 you need another update hook to clear that link.

Second, you can do far more powerful things than this. In the example above, we know the route name and have hardcoded it there. But what if we don’t know it yet and have to grab it from somewhere dynamically? That is where plugin derivatives come into play. For more information about what these are and how they work, do check out my previous article on the matter.

So let’s see an example of how we can define menu links dynamically. First, let’s head back to our *.links.menu.yml file and add our derivative declaration and then explain what we are doing:

    my_module.product_link:
      class: Drupal\my_module\Plugin\Menu\ProductMenuLink
      deriver: Drupal\my_module\Plugin\Derivative\ProductMenuLink
      menu_name: product

First of all, we want to dynamically create a menu link inside the product menu for each product on our site. Let’s say those are entities.

There are two main things we need to define for our dynamic menu links: the class they use and the deriver class responsible for creating a menu link derivative for each product. Additionally, we can add here in the YAML file all the static information that is common to all of these links. In this case, the menu name they’ll be in is the same for all of them, so we might as well add it here.

Next, we need to write those two classes. The first would typically go in the Plugin/Menu namespace of our module and can look as simple as this:

    <?php

    namespace Drupal\my_module\Plugin\Menu;

    use Drupal\Core\Menu\MenuLinkDefault;

    /**
     * Represents a menu link for a single Product.
     */
    class ProductMenuLink extends MenuLinkDefault {}

We don’t even need any specific functionality in our class if we don’t need it. We can extend the MenuLinkDefault class, which contains all that is needed for the default interaction with menu links — and, more importantly, implements the required MenuLinkInterface. But if we need to work with these links programmatically a lot, we can add some helper methods to access plugin information.

Next, we can write our deriver class that goes in the Plugin/Derivative namespace of our module:

    <?php

    namespace Drupal\my_module\Plugin\Derivative;

    use Drupal\Component\Plugin\Derivative\DeriverBase;
    use Drupal\Core\Plugin\Discovery\ContainerDeriverInterface;
    use Drupal\Core\Entity\EntityTypeManagerInterface;
    use Symfony\Component\DependencyInjection\ContainerInterface;

    /**
     * Derivative class that provides the menu links for the Products.
     */
    class ProductMenuLink extends DeriverBase implements ContainerDeriverInterface {

      /**
       * @var \Drupal\Core\Entity\EntityTypeManagerInterface
       */
      protected $entityTypeManager;

      /**
       * Creates a ProductMenuLink instance.
       *
       * @param string $base_plugin_id
       * @param \Drupal\Core\Entity\EntityTypeManagerInterface $entity_type_manager
       */
      public function __construct($base_plugin_id, EntityTypeManagerInterface $entity_type_manager) {
        $this->entityTypeManager = $entity_type_manager;
      }

      /**
       * {@inheritdoc}
       */
      public static function create(ContainerInterface $container, $base_plugin_id) {
        return new static(
          $base_plugin_id,
          $container->get('entity_type.manager')
        );
      }

      /**
       * {@inheritdoc}
       */
      public function getDerivativeDefinitions($base_plugin_definition) {
        $links = [];

        // We assume we don't have too many...
        $products = $this->entityTypeManager->getStorage('product')->loadMultiple();
        foreach ($products as $id => $product) {
          $links[$id] = [
            'title' => $product->label(),
            'route_name' => $product->toUrl()->getRouteName(),
            'route_parameters' => ['product' => $product->id()],
          ] + $base_plugin_definition;
        }

        return $links;
      }

    }

This is where most of the logic happens. First, we implement the ContainerDeriverInterface so that we can expose this class to the container and inject the Entity Type Manager. You can see the create() method signature is a bit different than you are used to. Second, we implement the getDerivativeDefinitions() method to return an array of plugin definitions based on the master definition (the one found in the YAML file). To this end, we load all our products and create the array of definitions.

Some things to note about this array of definitions. The keys of this array are the IDs of the derivatives, which in our case will match the product IDs. However, the menu link IDs themselves will be made up of the following construct: [my_module].product_link:[product-id]. That is the name of the link we set in the YAML file plus the derivative ID, separated by a colon.
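For example, assuming two products with IDs 1 and 2, the resulting menu link plugin IDs would be:

    my_module.product_link:1
    my_module.product_link:2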

The route name we add to the derivative is the canonical route of the product entity. And because this route is dynamic (has arguments) we absolutely must also have the route_parameters key where we add the necessary parameters for building this route. Had the route been static, no route params would have been necessary.

Finally, each definition is made up of what we specify here plus the base plugin definition for the link (which also includes everything we added in the YAML file). If we need to interact programmatically with these links and read some basic information about the products themselves, we can use the options key and store that data. This can then be read by helper methods in the Drupal\my_module\Plugin\Menu\ProductMenuLink class.
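As a sketch of such a helper (this assumes the deriver stored a product ID under the options key, which the example above does not yet do):

    /**
     * Returns the product ID stored by the deriver (hypothetical helper).
     */
    public function getProductId() {
      // MenuLinkDefault exposes the plugin definition, including 'options'.
      return $this->pluginDefinition['options']['product_id'] ?? NULL;
    }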

And that’s it. Now if we clear the cache, all our products appear in the menu. If we create another product, it gets added to the menu (once the caches are cleared).

Bonus

You know how you can define action links and local tasks (tabs) in the same way as menu links, in their respective YAML files? Well, the same applies to derivatives. So using this same technique, you can define local actions and tasks dynamically. The difference is that you extend a different class to represent the links: for local tasks it is LocalTaskDefault, and for local actions it is LocalActionDefault.
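For instance, a hypothetical my_module.links.task.yml entry using the same deriver pattern might look like this (all names invented for illustration):

    my_module.product_tab:
      class: Drupal\my_module\Plugin\Menu\LocalTask\ProductLocalTask
      deriver: Drupal\my_module\Plugin\Derivative\ProductLocalTask
      base_route: entity.product.canonical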

Summary

In this article we saw how we can dynamically create menu links in Drupal 8 using derivatives. In doing so, we also got a brief refresher on how derivatives work. This is a very powerful subsystem of the Plugin API that hides a lot of useful functionality. You just gotta dig it out and use it.

As of today, the Drupal Matrix API module now supports sending messages to a room via Rules. Now you can automatically configure notifications to Matrix rooms without touching any code!

This is useful if you want to get notified in a Matrix room of some event on your website, such as a new comment, a user registration, updated content, etc.

Rules is still in Alpha, and has some UI quirks, but it works fine.

Tags: Drupal, Matrix, Drupal 8, Drupal Planet, Integration
We want to keep you as informed as possible about all things Drupal. One way we do that is by rounding up the best work from other authors over the past month, so here are the best Drupal blogs from May. We will start our list with Improvements and changes in Commerce 2.x by Sascha Grossenbacher. In this blog post, the author focuses on explaining some of the key differences in the new version of Drupal Commerce and how they affect developers and users. Our second choice is What makes DrupalCon different? from Dagny Evans. She…

There is a real “elixir of vivacity” that can help your Drupal website or app come alive in a way it never has. Sound lucrative? You’ll discover the rest from today’s story. After a glimpse at combining Drupal with AngularJS, we are now moving on to another member of the JavaScript family that is rapidly gaining popularity — Node.js. Let’s discover the reasons for its recognition, the benefits of using Node.js with Drupal, and the tool that helps you bring them together.


Responsive design brings a fascinating array of challenges to both designers and developers. Using background images in a call to action or blockquote element is a great way to add visual appeal to a design, as you can see in the image to the left.



However, at mobile sizes, you’re faced with some tough decisions. Do you try and stretch the image to fit the height of the container? If so, at very tall/narrow widths, you’re forced to load a giant image, and it likely won’t be recognizable.

In addition, forcing mobile users to load a large image is bad for performance. Creating custom responsive image sets would work, but that sets up a maintenance problem, something most clients will not appreciate.

Luckily, there’s a solution that lets us keep the image aspect ratio, use standard responsive images, and still look great on mobile: the fade-out!

I’ll be using screenshots and code here, but I’ve also made all 6 steps available on CodePen if you want to play with the code and try out different colors, images, etc…



Let’s start with that first blockquote:

(pen) This is set up for desktop - the image aspect ratio determines the height of the container using the padding ratio trick (sketched after the next snippet). Everything in the container uses absolute positioning and flexbox for centering. We have a simple rgba() background set using the :before pseudo-element on the .parent-container:

    :before {
      content: "";
      display: block;
      position: absolute;
      width: 100%;
      height: 100%;
      background-color: rgba(0, 0, 0, 0.4);
      z-index: 10;
      top: 0;
    }
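For context, the padding ratio trick mentioned above works roughly like this (values illustrative): vertical padding percentages are computed from the element's width, so a bottom padding of height/width × 100% locks the container to the image's aspect ratio.

    .parent-container {
      position: relative;
      /* A 2:1 image => (1 / 2) * 100% = 50%; use your image's own ratio. */
      padding-bottom: 50%;
    }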



(pen) The issues arise once we get a quote of reasonable length, and/or the page width gets too small. As you can see, it overflows and breaks quite badly.



(pen) We can fix this by setting some changes to take place at a certain breakpoint, depending on the max length of the field and the size of the image used.

Specifically, we remove the padding from the parent element and make the .content-wrapper position: static. (I like to set a min-height as well, just in case the content is very small.)
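In CSS terms, the breakpoint changes just described might look like this sketch (the breakpoint and min-height values are assumptions):

    @media (max-width: 600px) {
      .parent-container { padding-bottom: 0; }
      .content-wrapper {
        position: static;
        min-height: 200px;
      }
    }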



(pen) Now we can add the fader code - background-image: linear-gradient, which can be used unprefixed. This is inserted into the .image-wrapper using another :before pseudo-element:

    :before {
      content: "";
      display: inline-block;
      position: absolute;
      width: 100%;
      height: 100%;
      background-image: linear-gradient(
        // Fade over the entire image - not great.
        rgba(0, 0, 0, 0.0) 0%,
        rgba(255, 0, 0, 1.0) 100%
      );
    }



(pen) The issue now is that the gradient covers the entire image, but we can fix that easily by adding additional rgba() values, in effect ‘stretching’ the part of the gradient that’s transparent:

    :before {
      background-image: linear-gradient(
        // Transparent at the top.
        rgba(0, 0, 0, 0.0) 0%,
        // Still transparent through 70% of the image.
        rgba(0, 0, 0, 0.0) 70%,
        // Now fade to solid to match the background.
        rgba(255, 0, 0, 1.0) 100%
      );
    }



(pen) Finally, we can fine-tune the gradient by adding even more rgba() values and setting the percentages and opacity as appropriate.

Once we’re satisfied that the gradient matches the design, all that’s left is to make the gradient RGBA match the .parent-container background color (not the overlay - this tripped me up for a while!), which in our case is supposed to be #000:


    :before {
      background-image: linear-gradient(
        rgba(0, 0, 0, 0.0) 0%,
        rgba(0, 0, 0, 0.0) 70%,
        // These three 'smooth' out the fade.
        rgba(0, 0, 0, 0.2) 80%,
        rgba(0, 0, 0, 0.7) 90%,
        rgba(0, 0, 0, 0.9) 95%,
        // Solid to match the background.
        rgba(0, 0, 0, 1.0) 100%
      );
    }

We’ll be rolling out sites in a few weeks with these techniques in live code, and with several slight variations to the implementation (mostly adding responsive images and making allowances for Drupal’s markup), but this is the core idea used.

Feel free to play with the code yourself, and change the rgba() values so that you can see what each is doing.

    docker-console init --tpl drupal7

People who follow our blog already know that we’re using Docker at Droptica. We also already told you how easy it is to start a project using our docker-drupal application (https://www.droptica.pl/blog/poznaj-aplikacje-docker-drupal-w-15-minut-docker-i-przyklad-projektu-na-drupal-8/). Another step on the road to getting efficient and proficient with Docker is the docker-console application, a newer version of docker-drupal; exactly like its predecessor, it was created to make building a working environment for Drupal simple and more efficient. How does it all work? You'll see in this write-up. Since we're all working on Linux (mainly Ubuntu), all commands shown in this post were executed on Ubuntu 16.04.
Acquia Showcases Headless Drupal Development for Boreal Mountain Resort

Nate Gengler Tue, 06/06/2017 - 12:59

We recently launched our first decoupled Drupal site for Boreal Mountain Resort. Working closely with our hosting platform, Acquia, and front-end developers Hoorooh Digital, we spun up rideboreal.com as a fully customized front-end experience with the back-end framework of Drupal 8.

Our hosting partners, Acquia, recapped the build with a fantastic blog post. It offers an in-depth look at the working relationship between Elevated Third, Acquia and Hoorooh Digital.

There is always satisfaction in retracing the progression of a project from initial discovery to final site launch. But more than an excuse to pat ourselves on the back, reflecting on projects helps us improve. It gives us a sense of how we stack up against our original goals and provides context for future builds.
For more information on decoupled Drupal Development and other industry news, Acquia’s blog is an awesome resource. Check it out! 


If you want to add Google's reCAPTCHA (https://www.google.com/recaptcha/intro/index.html) to your Drupal 7 forms programmatically, you need to follow these two steps:

1) Install and enable captcha (https://www.drupal.org/project/captcha) and recaptcha (https://www.drupal.org/project/recaptcha) modules. The best ...


"The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect."
- Tim Berners-Lee, W3C Director and inventor of the World Wide Web

As a community, Drupal wants to be sure that the websites and the features we build are accessible to everyone, including those who have disabilities. To be inclusive we must think beyond color contrasts, font scaling, and alt texts. Identifying the barriers and resolving them is fundamental in making the web inclusive for everyone.

Accessibility fosters social equality and inclusion for not just those with disabilities but also those with intermittent internet access in rural communities and developing nations.

The Bay Area is fortunate to have Mike Gifford visiting from Canada and he carries with him unique perspectives on web accessibility. Hook 42 has organized an evening with Mike for conversation, collaboration, and thought leadership surrounding Drupal Accessibility.

How do you import an RSS feed into entities with Drupal 8? In Drupal 6 and 7, you probably used the Feeds module. Feeds 7 made it easy (-ish) to click together a configuration that matches an RSS (or any XML, or CSV, or OPML, etc) source to a Drupal entity type, maps source data into Drupal fields, and runs an import with the site Cron. Where has that functionality gone in D8? I recently had to build a podcast mirror for a client that needed this functionality, and I was surprised at what I found.

The Feeds module doesn’t have a stable release candidate, and it doesn’t look like one is coming any time soon. They’re still surveying people about what the Feeds module should even DO in D8. As the module page explains:

It’s not ready yet, but we are brainstorming about what would be the best way forward. Want to help us? Fill in our survey.
If you decide to use it, don’t be mad if we break it later.

This does not inspire confidence.

The next great candidate is the Aggregator module (in core). Unfortunately, Aggregator gives you no control over the kind of entity to create, let alone any kind of field mapping. It imports content into its own Aggregated Content entity, with everything in one field, and links offsite. I suppose you could extend it to choose your own entity type, map fields, etc., but that seems like a lot of work for such a simple feature.

Frustrating, right?

What if I told you that Drupal 8 can do everything Feeds 7 can?

What if I told you that it’s even better: instead of clicking through endless menus and configuration links, waiting for things to load, missing problems, and banging your head against the mouse, you can set this up with one simple piece of text. You can copy and paste it directly from this blog post into Drupal’s admin interface.

What? How?

Drupal 8 can do all the Feedsy stuff you like with Migrate module. Migrate in D8 core already contains all the elements you need to build a regular importer of ANYTHING into D8. Add a couple of contrib modules to provide specific plugins for XML sources and convenience drush functions, and baby you’ve got a stew goin’!

Here’s the short version Howto:

1) Download and enable migrate_plus and migrate_tools modules. You should be doing this with composer, but I won’t judge. Just get them into your codebase and enable them. Migrate Plus provides plugins for core Migrate, so you can parse remote XML, JSON, CSV, or even arbitrary spreadsheet data. Migrate Tools gives us drush commands for running migrations.

2) Write your Migration configuration in text, and paste it into the configuration import admin page (admin/config/development/configuration/single/import), or import it another way. I’ve included a starter YAML just below, you should be able to copypasta, change a few values, and be done in time for tea.

3) Add a line to your system cron to run drush mi my_rss_importer at whatever interval you like (see the example crontab entry below).

That’s it. One YAML file, most of which is copypasta. One cronjob. All done!
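For step 3, the crontab entry could look something like this (path and schedule are assumptions; adjust to your environment):

    # Run the RSS import every hour.
    0 * * * * cd /var/www/mysite && drush mi my_rss_importer -y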

Here’s my RSS importer config for your copy and pasting pleasure. If you’re already comfortable with migration YAMLs and XPaths, just add the names of your RSS fields as selectors in the source section, map them to drupal fields in the process section, and you’re all done!

If you aren’t familiar with this stuff yet, don’t worry! We’ll dissect this together, below.

    id: my_rss_importer
    label: 'Import my RSS feed'
    status: true

    source:
      plugin: url
      data_fetcher_plugin: http
      urls: 'https://example.com/feed.rss'
      data_parser_plugin: simple_xml
      item_selector: /rss/channel/item
      fields:
        - name: guid
          label: GUID
          selector: guid
        - name: title
          label: Title
          selector: title
        - name: pub_date
          label: 'Publication date'
          selector: pubDate
        - name: link
          label: 'Origin link'
          selector: link
        - name: summary
          label: Summary
          selector: 'itunes:summary'
        - name: image
          label: Image
          selector: 'itunes:image[''href'']'
      ids:
        guid:
          type: string

    destination:
      plugin: 'entity:node'

    process:
      title: title
      field_remote_url: link
      body: summary
      created:
        plugin: format_date
        from_format: 'D, d M Y H:i:s O'
        to_format: 'U'
        source: pub_date
      status:
        plugin: default_value
        default_value: 1
      type:
        plugin: default_value
        default_value: podcast_episode

Some of you can just stop here. If you’re familiar with the format and the structures involved, this example is probably all you need to set up your easy RSS importer.

In the interest of good examples for Migrate module though, I’m going to continue. Read on if you want to learn more about how this config works, and how you can use Migrate to do even more amazing things…

Anatomy of a migration YAML

Let’s dive into that YAML a bit. Migrate is one of the most powerful components of Drupal 8 core, and this configuration is your gateway to it.

That YAML looks like a lot, but it’s really just 4 sections. They can appear in any order, but we need all 4: General information, source, destination, and data processing. This isn’t rocket science after all! Let’s look at these sections one at a time.

General information

    id: my_rss_importer
    label: 'My RSS feed importer'
    status: true

This is the basic stuff about the migration configuration. At a minimum it needs a unique machine-readable ID, a human-readable label, and status: true so it’s enabled. There are other keys you can include here for fun extra features, like module dependencies, groupings (so you can run several imports together!), tags, and language. These are the critical ones, though.
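For illustration, those optional keys might look like this (values invented; migration_group and migration_tags come from migrate_plus):

    id: my_rss_importer
    label: 'My RSS feed importer'
    status: true
    migration_group: my_feeds
    migration_tags:
      - rss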

Source

    source:
      plugin: url
      data_fetcher_plugin: file
      urls: 'https://example.com/feed.rss'
      data_parser_plugin: simple_xml
      item_selector: /rss/channel/item
      fields:
        - name: guid
          label: GUID
          selector: guid
        - name: title
          label: 'Item Title'
          selector: title
        - name: pub_date
          label: 'Publication date'
          selector: pubDate
        - name: link
          label: 'Origin link'
          selector: link
        - name: summary
          label: Summary
          selector: 'itunes:summary'
      ids:
        guid:
          type: string

This is the one that intimidates most people: it’s where you describe the RSS source. Migrate module is even more flexible than Feeds was, so there’s a lot to specify here… but it all makes sense if you take it in small pieces.

First: we want to use a remote file, so we’ll use the Url plugin (there are others, but none that we care about right now). All the rest of the settings belong to the Url plugin, even though they aren’t indented or anything.

There are two possibilities for Url’s data_fetcher setting: file and http. file is for anything you could pass to PHP’s file_get_contents, including remote URLs. There are some great performance tricks in there, so it’s a good option for most use cases. We’ll be using file for our example. http is specifically for remote files accessed over HTTP, and lets you use the full power of the HTTP spec to get your file. Think authentication headers, cache rules, etc.
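As an aside: if you pick http because the feed needs authentication, migrate_plus's http fetcher can pass request headers along. A hedged sketch (the header and token are invented):

    source:
      plugin: url
      data_fetcher_plugin: http
      headers:
        Authorization: 'Bearer SOME_TOKEN'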

Next we declare which plugin will read (parse) the data from that remote URL. We can read JSON, SOAP, arbitrary XML… in our use case this is an RSS feed, so we’ll use one of the XML plugins. SimpleXML is just what it sounds like: a simple way to get data out of XML. In extreme use cases you might use XML instead, but I haven’t encountered that yet (ever, anywhere, in any of my projects). TL;DR: SimpleXML is great. Use it.

Third, we have to tell the source where it can find the actual items to import. XML is freeform, so there’s no way for Migrate to know where the future “nodes” are in the document. So you have to give it the XPath to the items. RSS feeds have a standardized path: /rss/channel/item.

Next we have to identify the “fields” in the source. You see, Migrate module is built around the idea that you’ll map source fields to destination fields; that’s core to how it thinks about the whole process. Since XML (and by extension RSS) is an unstructured format, it doesn’t think of itself as having “fields” at all. So we’ll have to give our source plugin XPaths for the data we want out of the feed, assigning each path to a virtual “field”. These “fake fields” let Migrate treat this source just like any other.

If you haven’t worked with XPaths before, the example YAML in this post gives you most of what you need to know. It’s just a simple text system for specifying a tag within an unstructured XML document. Not too complicated when you get into it. You may want to find a good tutorial to learn some of the tricks.
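To make the selectors concrete, here is an invented RSS item shaped like the one this migration expects; each selector in the source section points at one of these tags:

    <rss xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd">
      <channel>
        <item>
          <guid>example-episode-1</guid>
          <title>Episode 1</title>
          <pubDate>Tue, 06 Jun 2017 12:00:00 +0000</pubDate>
          <link>https://example.com/episode-1</link>
          <itunes:summary>Our first episode.</itunes:summary>
          <itunes:image href="https://example.com/episode-1.jpg"/>
        </item>
      </channel>
    </rss>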

Let’s look at one of these “fake fields”:

    name: summary
    label: Summary
    selector: 'itunes:summary'

name is how we’ll address this field in the rest of the migration. It’s the source “field name”. label is the human readable name for the field. selector is the XPath inside the item. Most items are flat – certainly in RSS – so it’s basically just the tag that surrounds the data you want. There, was that so hard?

As a side note, you can see that my RSS feeds tend to be for iTunes. Sometimes the world eats an apple, sometimes an apple eats the world. Buy me a beer at Drupalcon and we can argue about standards.

Fifth and finally, we identify which “field” in the source contains a unique identifier. Migrate module keeps track of the association between the source and destination objects, so it can handle updates, rollbacks, and more. The example YAML relies on the very common (but technically optional) <guid> tag as a unique identifier.

Destination

    destination:
      plugin: 'entity:node'

Yep, it’s that simple. This is where you declare what Drupal entity type will receive the data. Actually, you could write any sort of destination plugin for this – if you want Drupal to migrate data into some crazy exotic system, you can do it! But in 99.9% of cases you’re migrating into Drupal entities, so you’ll want entity:something here. Don’t worry about bundles (content types) here; that’s something we take care of in field mapping.

Process

    process:
      title: title
      field_remote_url: link
      body: summary
      created:
        plugin: format_date
        from_format: 'D, d M Y H:i:s O'
        to_format: 'U'
        source: pub_date
      status:
        plugin: default_value
        default_value: 1
      type:
        plugin: default_value
        default_value: podcast_episode

This is where the action happens: the process section describes how destination fields should get their data from the source. It’s the “field mapping”, and more. Each key is a destination field, each value describes where the data comes from.

If you don’t want to migrate the whole field exactly as it’s presented in the source, you can put individual fields through Migrate plugins. These plugins apply all sorts of changes to the source content, to get it into the shape Drupal needs for a field value. If you want to take a substring from the source, explode it into an array, extract one array value and make sure it’s a valid Drupal machine name, you can do that here. I won’t do it in my example because that sort of thing isn’t common for RSS feeds, but it’s definitely possible.
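Just to show the shape of such a pipeline, here is that exact chain against a hypothetical field (the plugins are real core process plugins; the combination is invented):

    field_slug:
      - plugin: substr
        source: title
        start: 0
        length: 32
      - plugin: explode
        delimiter: ' '
      - plugin: extract
        index:
          - 0
      - plugin: machine_name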

The examples of plugins that you see here are simple ones. status and type show you how to set a fixed field value. There are other ways, but the default_value plugin is the best way to keep your sanity.

The created field is a bit more interesting. The Drupal field is a unix timestamp of the time a node was authored. The source RSS uses a string time format, though. We’ll use the format_date plugin to convert between the two. Neat, eh?

Don’t forget to map values into Drupal’s status and type fields! type is especially important: that’s what determines the content type, and nodes can’t be saved without it!

That’s it?

Yes, that’s it. You now have a migrator that pulls from any kind of remote source, and creates Drupal entities out of the items it finds. Your system cron entry makes sure this runs on a regular schedule, rather than overloading Drupal’s cron.

More importantly, if you’re this comfortable with Migrate module, you’ve just gained a lot of new power. This is a framework for getting data from anywhere, to anywhere, with a lot of convenience functionality in between.

Happy feeding!

Tips and tricks

OK I lied, there is way more to say about Migrate. It’s a wonderful, extensible framework, and that means there are lots of options for you. Here are some of the obstacles and solutions I’ve found helpful.

Importing files

Did you notice that I didn’t map the images into Drupal fields in my example? That’s because it’s a bit confusing. We actually have an image URL that we need to download, then we have to create a file entity based on the downloaded file, and then we add the File ID to the node’s field as a value. That’s more complicated than I wanted to get into in the general example.

To do this, we have to create a pipeline of plugins that will operate in sequence, to create the value we want to stick in our field_image. It looks something like this:

    field_image:
      - plugin: download
        source:
          - image
          - constants/destination_uri
        rename: true
      - plugin: entity_generate

Looking at that download plugin, image seems clear. That’s the source URL we got out of the RSS feed. But what is constants/destination_uri, I hear you cry? I’m glad you asked. It’s a constant, which I added in the source section and didn’t tell you about. You can add any arbitrary keys to the source section, and they’ll be available like this in processing. It is good practice to lump all your constants together into one key, to keep the namespace clean. This is what it looks like:

    source:
      # ... usual source stuff here ...
      constants:
        destination_uri: 'public://my_rss_feed/post.jpg'

Before you ask, yes this is exactly the same as using the default_value plugin. Still, default_value is preferred for readability wherever possible. In this case it isn’t really possible.

Also, note that the download plugin lets me set rename: true. This means that in case of a name conflict, a 0, 1, 2, 3 etc will be added to the end of the filename.

You can see the whole structure here, of one plugin passing its result to the next. You can chain unlimited plugins together this way…

Multiple interrelated migrations

One of the coolest tricks that Migrate can do is to manage interdependencies between migrations. Maybe you don’t want those images just as File entities, you actually want them in Paragraphs, which should appear in the imported node. Easy-peasy.

First, you have to create a second migration for the Paragraph. Technically you should have a separate migration YAML for each destination entity type (yes, entity_generate is a dirty way to get around that; use it sparingly). So we create our second migration just for the paragraph, like this:

    id: my_rss_images_importer
    label: 'Import the images from my RSS feed'
    status: true

    source:
      plugin: url
      data_fetcher_plugin: http
      urls: 'https://example.com/feed.rss'
      data_parser_plugin: simple_xml
      item_selector: /rss/channel/item
      fields:
        - name: guid
          label: GUID
          selector: guid
        - name: image
          label: Image
          selector: 'itunes:image[''href'']'
      ids:
        guid:
          type: string
      constants:
        destination_uri: 'public://my_rss_feed/post.jpg'

    destination:
      plugin: 'entity:paragraph'

    process:
      type:
        plugin: default_value
        default_value: podcast_image
      field_image:
        - plugin: download
          source:
            - image
            - constants/destination_uri
          rename: true
        - plugin: entity_generate

If you look at that closely, you’ll see it’s a simpler version of the node migration we did at first. I did the copy pasting myself! Here are the differences:

  • Different ID and label (duh)
  • We only care about two “fields” on the source: GUID and the image URL.
  • The destination is a paragraph instead of a node.
  • We’re doing the image trick I just mentioned.

Now, in the node migration, we can add our paragraphs field to the “process” section like this:

    field_paragraphs:
      plugin: migration_lookup
      migration: my_rss_images_importer
      source: guid

We’re using the migration_lookup plugin. This plugin takes the value of the field given in source, and looks it up in my_rss_images_importer to see if anything with that source ID was migrated. Remember where we configured the source plugin to know that guid was the unique identifier for each item in this feed? That comes in handy here.

So we pass the guid to migration_lookup, and it returns the id of the paragraph which was created for that guid. It finds out what Drupal entity ID corresponds to that source ID, and returns the Drupal entity ID to use as a field value. You can use this trick to associate content migrated from separate feeds, totally separate data sources, or whatever.

You should also add a dependency on my_rss_images_importer at the bottom of your YAML file, like this:

    migration_dependencies:
      required:
        - my_rss_images_importer

This will ensure that my_rss_images_importer will always run before my_rss_importer.

(NB: in Drupal < 8.3, this plugin is called migration)

Formatting dates

Very often you will receive dates in a format other than what Drupal wants to accept as a valid field value. In this case the format_date process plugin comes in very handy, like this:

    field_published_date:
      plugin: format_date
      from_format: 'D, d M Y H:i:s O'
      to_format: 'Y-m-d\TH:i:s'
      source: pub_date

This one is pretty self-explanatory: from format, to format, and source. This is important when migrating from Drupal 6, whose date fields store dates differently from 8. It’s also sometimes handy for RSS feeds. :)

Drush commands

Very important for testing, and the whole reason we have the migrate_tools module installed! Here are some handy drush commands for interacting with your migration:

  • drush ms: Gives you the status of all known migrations. How many items are there to import? How many have been imported? Is the import running?
  • drush migrate-rollback: Rolls back one or more migrations, deleting all the imported content.
  • drush migrate-messages: Get logged messages for a particular migration.
  • drush mi: Runs a migration. use --all to run them all. Don’t worry, Migrate will sort out any dependencies you’ve declared and run them in the right order. Also worth noting: --limit=10 does a limited run of 10 items, and --feedback=10 gives you an in-progress status line every 10 items (otherwise you get nothing until it’s finished!).
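Putting those together, a typical test loop with the migration from this article might look like this (output omitted):

    drush ms
    drush mi my_rss_importer --limit=10 --feedback=10
    drush migrate-messages my_rss_importer
    drush migrate-rollback my_rss_importer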

Okay, now that’s really it. Happy feeding!

GSoC 2017 | Week 1: Port Vote Up/Down sudhanshu Wed, 06/07/2017 - 11:41

Creating a responsive mega menu is a common requirement on any project, Drupal 8 or otherwise. And while we can find solutions that offer to create mega menus easily, very often these solutions remain quite rigid and can hardly be adapted to the requirements of a project. But what is a mega menu? It is nothing more than a menu that contains a little more than a list of links (as provided by the menu system of Drupal 8): specific links, text, images, calls to action, etc.

Drupal 8 is a powerful and customizable CMS.

It provides a lot of different tools to add, store, and visualize data; however, spatial data visualization is a sophisticated and complicated topic, and Drupal hasn't always been the best option for handling it.

Because of its complexity, spatial data requires a specific process to become visual. We often think of a map with some pins or location points, but there are much more complex edge cases where Drupal is not able to solve the end user needs, such as rendering thousands of points or complex geometries in a map, or trying to create heatmaps based on stored data.

That’s why it's important to acknowledge that Drupal is not a golden hammer and the use of third party services will help us to provide a much better and appropriate user experience. This is where we introduce CARTO.

CARTO is a powerful spatial data analysis platform that provides different services related to the geographical information stored in a spatial database in the cloud. The fact that the base of all this process is a database table makes the connection between Drupal and CARTO simple and fairly straightforward.

From our point of view, two of the most useful tools provided by CARTO to be integrated within Drupal are the Import API and Builder. (There are some other ones that are interesting for more advanced users).

  • Import API allows you to upload files to a CARTO account, check on their current upload status, as well as delete and list importing processes on a given account.
  • CARTO Builder is a web-based drag and drop analysis tool for analysts and business users to discover and predict key insights from location data. Maps generated with this tool can be shared or embedded in any website.

So, at this point we have two systems — Drupal and CARTO — with the following features:

  • Drupal, a very capable tool to create, store, and establish relationships between content
  • CARTO, a powerful platform able to import spatial data, process it and generate amazing performance maps that can be shared
  • Drupal Media, an ecosystem that allows embedding and integrating external resources as entities

The problem is generating powerful maps and including them in any Drupal site.

First, the data stored in Drupal has to be pushed to CARTO. Then the maps are generated in CARTO before being embedded in Drupal. 

This can now easily be done using CARTO Sync and Media entity CARTO, both Drupal modules.

  • CARTO Sync allows the results of a Drupal View to be pushed to CARTO for processing
  • Media Entity CARTO integrates CARTO Builder shared maps within the Media ecosystem and allows you to create map entities that can be referenced or embedded in any Drupal content

Following this method, we can still use Drupal as the CMS, while taking advantage of all the features that CARTO provides in order to represent accurate spatial information.

If you find this topic interesting, please take a look at the slides or recording from the presentation at DrupalCamp Madrid 2017.

Tags: Drupal Planet, CARTO, Drupal 8, spatial data, mapping

First off, I want to emphasize that the blog post below is my opinion and personal feelings as someone who has spent the past year building the Webform 8.x-5.x module for the Drupal community. Now I want to see it continue to grow and flourish. There are many thought leaders, including Dries, who have contemplated and publicly discussed the concept and benefits of crowdfunding and have used this approach to fund Drupal 8 core and module development.

Drupal 8 was, and still is, a monolithic accomplishment - one that continues to be an ambitious undertaking to maintain and improve. The Drupal community might still be waiting for Drupal 8 to be released if organizations did not crowdfund and accelerate D8. It is our togetherness, our pooling of our resources, that allows us to accomplish great things, like Drupal. At the same time, the Drupal community is made up of a network of relationships and collaborations. Drupal and Open-source’s success depends on its collaborative community, which is driven by relationships. Crowdfunding solves a big problem, pooling resources to fund open source, but it does not build relationships. Drupal's strength lies in its community, bonded together by healthy and productive relationships.

I feel that crowdfunding, especially within the Drupal contributed project space, is just handing out fish without teaching project maintainers how to fish or even companies how to properly hand out fish. Crowdfunding Drupal projects does not build relationships between project maintainers and organizations/companies. The most obvious issue is that crowdfunding typically has a limited number of fish. Conversely, dozens of companies are throwing fish, aka money, into a pool that is drained by project maintainers, who don't even know the origin of this particular fish. Finally, the most...

GenomeWeb

brandt Wed, 06/07/2017 - 11:29

Increasing Engagement Using Segmented Content

Using Domain Access to manage content between multiple sites.

Highlights

  • Multi-headed Drupal architecture
  • Audience segmentation using domain-specific registrations
  • Efficient editorial and user management workflows

Our Client

GenomeWeb is an independent news organization that provides online reporting on genomic technologies. Historically they have focused on this very narrow niche of the bio industry, and they are the leading news site in that particular field. Their site has an active community with over 200,000 users and about 20 new articles being published daily.

Over time GenomeWeb saw that the technologies they were covering were moving very quickly into healthcare and diagnostics, and they wanted to expand their news coverage into the molecular diagnostics space.

Goals and Direction

Instead of adding new content directly to the existing site, GenomeWeb wanted to create a new sister site to be located at www.360Dx.com, which would include existing diagnostic content and also new coverage that could be marketed to a broader diagnostics audience. The new site would host less technical and more business-focused content, as well as share content with the current GenomeWeb site.

Goals for the new 360Dx site and multi-headed architecture:

  • Content from each site should be easily accessible for both sets of audiences.
  • New clinical content should only live on 360Dx.
  • Sites should keep the same user database. If someone is a user on GenomeWeb, they should have the same level of access on the new 360 site. This means paying for a premium level of access on one site would grant users premium access on the other.

“It was a very complex project. The site was already complicated to begin with.” — Bernadette Toner, CEO

How We Helped

To extend their business model to another site, Palantir used the Domain module suite to enable editors to assign content to both genomeweb.com and 360Dx.com. With Domain, the two sites can share some content and cross-promote articles to new audiences while having unique themes and settings.

The team developed a new derivative theme for 360Dx.com and ensured that content, users, and views were assigned to the proper domain. This work included analysis of existing modules and content, the creation and testing of update scripts, and configuration of domain-specific settings for analytics, ads, and other services. We also worked with the GenomeWeb team to integrate domains into their memberships, so that users could subscribe to email news bulletins from either or both sites independently.

The new site structure we created had very intuitive workflows, which meant the GenomeWeb team did not need extensive training to learn the new functionality. We worked to ease deployment and updates using the Features module and through documentation of domain configurations.

The Results

The new multi-headed Drupal architecture created multiple wins for GenomeWeb. There is a wealth of content between their two sites, and by using Domain Access they are able to easily manage it all in one place. It has been easy for editors to post content and decide if it should go to one site or both, and there hasn’t been a huge change in their daily workflow.

The new architecture also allows GenomeWeb to engage with their audience on a deeper level: by having different kinds of registrations for each site, GenomeWeb is able to collect different demographics and target specific segments of their audience with more data. Although the site is still new, GenomeWeb has met their initial projections, and they anticipate being able to personalize their efforts even more as more data compiles.

“The new site works as we envisioned, which doesn’t always happen. The Palantir team listened to what we needed and was able to make it happen, and we are really, really happy with the results.” — Bernadette Toner, CEO


