Drupal Feeds

Over the past couple of months, I have repeatedly run into the same situation: many developers are unable to set up a local dev box for a Drupal 8 site hosted on Pantheon. Pantheon is a strong hosting choice because it offers a fast path to a local setup and makes good use of Drush to accelerate administrative and development tasks for Drupal sites.

For an introduction to the Drush command, you can check our previous blog post, where we explained writing custom Drush commands in Drupal and installing Drupal with Drush. In this post, I will show you how to set up a local dev box for Drupal 8. This blog…

A Preview of DrupalCon 2017 in Vienna Dmitrii Susloparov Wed, 09/06/2017 - 16:01

DrupalCon Vienna must be music to the ears of Drupal developers. Every year, Drupal developers flock to DrupalCon to collaborate, network, and learn in a beautiful urban location in Europe or North America, with the objectives of supporting the Drupal community and furthering their Drupal careers. 2017 will be no exception. The popular annual European edition of the event will be held in Vienna, the grand Austrian capital, September 26-29, 2017. Vardot, a long-time contributor in the Drupal community and a sponsor of DrupalCon Vienna, will be there.


Why you should attend DrupalCon Vienna

There are many reasons for attending DrupalCon Vienna. If you are new to Drupal 8, you will definitely want to come and soak up all things Drupal 8 this year. In fact, Drupal developers have 132 reasons to attend DrupalCon Vienna, one per accepted conference session. The excellent sessions rank high on most developers’ checklists. For the 2017 conference, a whopping total of over 500 session proposals were submitted. Acceptance standards were rigorous, and only 132 sessions, or 26%, were accepted for DrupalCon Vienna. Attendees will not be disappointed by the quality of the carefully pre-selected sessions.


The accepted sessions together make up 108 hours of quality learning opportunities for attendees. To help you find the sessions that interest you the most, the sessions are classified into 13 tracks, covering the entire spectrum of topics of interest to the Drupal community. The top 4 tracks with the most submitted session proposals in DrupalCon Vienna are: Being Human, Coding and Development, Business, and Site Building.


Being Human

While 3 of the 4 above-mentioned session tracks are reasonably self-explanatory, the Being Human track perhaps needs some explanation. This track covers the human dynamics in a Drupal project and community. Speakers are encouraged to share personal anecdotes to illustrate principles of maintaining a healthy community and project. Leadership, mentorship, the gender gap, and work-life balance are all key ideas in the Being Human track.


Specifically, three of the sessions in this track drew my attention. Two are related to promoting diversity in the Drupal workplace and, in a larger context, the Drupal community. The Debugging the Gender Gap session addresses the current gender imbalance in the Drupal industry and suggests some solutions to correct the situation. The From a Single Fighter to a Team Player session makes an effort to bridge gaps of a different kind, namely cultural and language gaps. The speaker draws on his personal experience as a visible minority in the European tech industry. To paraphrase him, how a job is done is more important than doing a perfect job.


Drupal is different from (and better than) a lot of other open-source projects because of its vision of, and commitment to, an open and inclusive community. These sessions are steps toward fulfilling that vision.


Coding and Development

DrupalCon Vienna comes almost 2 years after the release of Drupal 8, so it is not surprising that its sessions, including those in the Coding and Development track, are almost exclusively Drupal-8-centric. If you are still in the process of migrating to Drupal 8, it is not too late. Migrate Everything into Drupal 8 and Doctor, Will My Drupal 7 Commerce Site Survive the Upgrade? are 2 sessions that you should not miss.


One huge benefit of attending DrupalCon Vienna is to learn the latest practical development tips and techniques. Drupal developers will pick up valuable debugging knowledge from the Wait, there’s more! - Advanced debugging tactics session. I also like 2 other sessions on testing. Improved development process with better QA approach will frame a good overall mindset on testing, while Testing small to medium size client projects with Behat will drill into a specific test tool.


Business

Unless you are a pure Drupal hobbyist, sooner or later, you will have to figure out how to make your Drupal business viable. Pinpointing star sessions in the Business track is difficult because it depends on where you are in the life cycle of a Drupal business.


If you are an entrepreneur about to start a new Drupal venture, I’d recommend Co-operative Drupal: Growth & Sustainability through Worker Ownership. Here, you are challenged to make every employee an owner of the company. The co-op ownership model is still very much a novelty in the Drupal industry. However, the speaker will argue for its merits and share personal success stories.


If your objective is to grow your existing Drupal business, then sales and marketing is perhaps your focus in this track. You can weigh whether accessibility is an applicable value that you can sell to your clients, as suggested by the Accessibility as a Business Proposition session. You can also learn valuable lessons on how to build a sales team from the session entitled Is Selling Drupal an Art or a Science?


Last but not least, if you are running a well-established Drupal business and pondering your next step, then How to go from one to seven companies around the world and how to run them is a must-attend for you. The speakers will present the challenges of diversifying a successful company and how they met those challenges head-on.


Site Building

Decoupled Drupal (aka headless Drupal) has the potential to effect a paradigm shift in how websites are built. Essentially, the idea is to separate the content (the Drupal CMS backend) from the display frontend. The Site Building track in DrupalCon Vienna includes 2 sessions which feature the headless architecture: Decoupled site building: Drupal's next challenge, and Headless, stateless, DB-less: how Kurier.at is transforming digital production with Drupal, NodeJS and Platform.sh. These sessions not only introduce the possibilities and implications of such an architecture, but also point to some working examples. Site builders not familiar with the idea should definitely attend at least 1 of the sessions.


Other tracks (and sprints too)

Kudos to DrupalCon Vienna for the breadth of topics covered. Besides the above 4 tracks, developers will also be attracted to the Core Conversations, DevOps, Front End, Performance and Scaling, PHP, and Symfony tracks. And, if you want to step temporarily away from the programming side of Drupal, you will be stimulated by the Project Management, Drupal Showcase, and Horizons tracks.


While the formal sessions are great, you may want to add some activities that are more participatory in nature. DrupalCon Vienna has planned for that as well. Besides the formal sessions, there are also Birds of a Feather (aka BoF) sessions and Sprints.


BoFs are informal gatherings during DrupalCon Vienna on a specific Drupal topic, but without a pre-planned agenda. This allows attendees to collaborate and share their ideas freely and organically on a target topic. BoFs are fun, and their outcomes are often unpredictable. In contrast, Sprints are hands-on sessions that tackle specific, focused tasks for the Drupal project. Example activities include squashing bugs, specifying a new feature, refactoring a small module, etc. Both BoFs and Sprints are very popular among attendees and can fill up quickly.


Have fun

DrupalCon Vienna offers more than just sessions. You can sprinkle DrupalCon Vienna with social events in order to network with fellow Drupal community members. And what better backdrop to befriend them than Vienna, a city of music, art, culture, and fine cuisine.


Fellow developers, when you attend DrupalCon Vienna this coming September, stock up on coffee, because you are going to need it with so many good activities for your enjoyment and career development. And if you bump into someone from Vardot in the coffee line between sessions, don't forget to say hello - we’re always happy to see Drupalists around.


Which sessions of DrupalCon Vienna are you planning to attend? Which ones are the most attractive to you, and why? Share your opinion in the comment section below. See you soon in Austria!

Maestro D8 Concepts Part 2: The Workflow Engine's Internals randy Wed, 09/06/2017 - 09:54

The Maestro Engine is the mechanism responsible for executing a workflow template: assigning tasks to actors, executing engine tasks, and providing all of the other logic and glue functionality needed to run a workflow. The maestro module is the core module in the Maestro ecosystem and houses the template, variable, assignment, queue, and process schema. It also provides the Maestro API, with which developers can interact with the engine to do things such as setting and getting process variables, starting processes, and moving the queue along, among many other things.
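As a rough, hypothetical sketch of what working with that API can look like (the method names and signatures below are assumptions based on this description, not verified against the module):

use Drupal\maestro\Engine\MaestroEngine;

// Start a new process from a workflow template (assumed helper).
$process_id = MaestroEngine::newProcess('approval_workflow');

// Set and then read back a process variable (assumed helpers).
MaestroEngine::setProcessVariable('approval_status', 'pending', $process_id);
$status = MaestroEngine::getProcessVariable('approval_status', $process_id);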

Xeno Media lead developer Michael Porter was selected to present Automating Putting Jenkins To Work For You at Drupal Camp St. Louis on Saturday, Sept. 23. Michael is a seasoned speaker and an expert on automation and testing.

Michael will explain how we can use the power of Continuous Integration (CI) servers to offload some of the repetitive tasks developers and software maintainers need to do on a daily basis. Running core and module updates, unit tests, and reporting can be automated and communicated using tools he will outline in this presentation.

We chose Jenkins as a Continuous Integration Server because it is:

  • Well documented
  • Open Source
  • Widely Used
  • Extensible

In this session, Michael will demonstrate how to use Jenkins to automate:

  • DB backups
  • A multibranch Stage/Testing Server
  • Behat tests
  • Coding standards tests
  • SiteSpeed.io reports

The fourth annual Drupal Camp St. Louis will be held September 22nd and 23rd at the University of Missouri - St. Louis. Learn more at https://2017.drupalstl.org.

No, this is not a joke.

Decoupled Developer Days took place in New York City on August 19th and 20th and was hosted by NBCUniversal at 30 Rock in the heart of Rockefeller Plaza.

This was a first-time event. Mediacurrent was a Gold Sponsor, and several colleagues were in attendance, including Co-Lead Organizer Matt Davis, with whom I’m currently working on a client project.

Only 2 More Days to Submit Your Session Grace Lovelace Tue, 09/05/2017 - 2:13pm

Everybody has a unique Drupal story to tell, and we want to hear yours.

Submissions for sessions close on September 6 at 11:59 pm PT.

Submit your session today!

We’re particularly interested in hearing from those who have used Drupal in interesting, innovative, and significant ways.

We’re also looking for dynamic and versatile speakers to talk about how they use Drupal to grow their organization, develop modules, tell their stories, and much much more.

There's much we can learn from others who haven't traditionally spoken to our community. Your voice is welcome here. If you work on other Open Source projects or on web design and development in general, and you think the Drupal community could benefit from your story, we want it told at BADCamp.


Sponsors

A BIG thanks to our sponsors who have committed early. Without them, this magical event wouldn’t be possible. We are also looking for MORE sponsors to help keep BADCamp free and awesome. Interested in sponsoring BADCamp? Contact anne@badcamp.net.



Today, two security advisories were posted for modules that have Drupal 6 versions.

Happily, neither issue affects the Drupal 6 version of the modules!

I think this is particularly important for the Critical issue in Clientside Validation. Anyone who uses the Drupal 7 version of that module should update immediately! But, this time, Drupal 6 users can rest easy. :-)

If you'd like all your Drupal 6 modules to receive security updates and have the fixes deployed the same day they're released, please check out our D6LTS plans.

Note: if you use the myDropWizard module (totally free!), you'll be alerted to these and any future security updates, and will be able to use drush to install them (even though they won't necessarily have a release on Drupal.org).

Sometimes you need to pull in content or data on an ongoing basis from a third-party product or website. Maybe you want to pull in a list of books from Amazon, or show some products from your Shopify store. You may need all the flexibility of nodes in Drupal, but you don’t want to copy content manually, and you don’t want to be forced to move away from those other systems that are already serving your needs.

Here’s a recipe for synchronizing content from outside websites or products – in our case, Eventbrite – using the Migrate module.

But First, A Few Alternatives

In our specific project, we considered a few alternatives before landing on Migrate. We could have reimplemented Eventbrite's functionality in Drupal. However, we didn’t want to abandon the product (Eventbrite) that was already meeting our client’s needs perfectly. We just needed to pull in the content itself, without having to manage it in multiple places. We also considered a client-side application like Vue.js or React to simply re-present their Eventbrite content on the site in a seamless manner. But with that approach, we would lose the flexibility of storing events as nodes and would need to reinvent many of the features which Drupal gives us for free, like clean URLs, Views, Search, fine-grained caching, and more.

What we really needed was a continuous content synchronization between Eventbrite and Drupal that would leverage Eventbrite for content entry and event management and Drupal for a seamless integration with the rest of their site. But, how to do it?

Enter the Migrate Module

But Migrate is just for moving old sites into new ones, right? In reality, Migrate comes with a plethora of excellent built-in plugins that make importing content a breeze. Moreover, it has all the necessary concepts to run migrations on a schedule without importing anything that isn’t new or updated. While it’s often overlooked, Migrate is as perfect a tool for continuous content synchronization as it is for one-time content migration.

In Drupal 7, the Feeds module was often used for these kinds of tasks. Feeds isn’t as far along in Drupal 8, and Migrate is now a much more flexible platform on which to build these kinds of integrations.

Important Concepts

In order to understand how to use Migrate as a content synchronization tool, you’ll first need to understand a few important concepts about how Migrate is built. The Migrate module makes liberal use of Drupal 8 plugins, which makes it incredibly flexible, but also a little hard to understand at first, especially when coming directly from Drupal 7.

Migrations are about taking arbitrary data from one bucket of content and funneling it into a new Drupal-based bucket of content. In Migrate-speak, the first bucket of data is your data "source." Your Drupal site is then the "destination."

Between those two buckets – your source and your destination – you may need to manipulate or alter the data to make it compatible with your Drupal content types and fields. In Migrate, this is called a "processor." For example, you may need to transform a Unix timestamp from your source data into a formatted date, or make taxonomy terms out of a list of strings. Migrate lets you describe one or more "processing pipelines" for each field of the data you'll be importing.

These are the three key components we'll be working with:

  1. "source" plugins (to fetch the data to import)
  2. "process" plugins (to transform that data into something easier to use)
  3. "destination" plugins (to create our Drupal nodes).

The "Source" Plugin

Migrate already comes with a few source plugins out-of-the-box. They can plug into generic data sources like a legacy SQL database, CSV, XML, or JSON files. However, what we needed for our client was to integrate with a somewhat non-standard JSON-based API. For that, you’ll need to write a custom source plugin.

Q: How? A: SourcePluginBase and MigrateSourceInterface.

When implementing a source plugin, you’ll need to extend from SourcePluginBase and implement all the methods required by MigrateSourceInterface.

SourcePluginBase does much of the heavy lifting for you, but there remains one essential method you must write yourself, and it is by far the most complicated part of this entire effort: the initializeIterator() method. This method must return something that implements PHP’s built-in \Iterator interface. In a custom migration connecting to a custom API, you’ll need to write your own class which implements this interface. An iterator is an object which can be used in a foreach in place of a PHP array. In that respect, they’re very much the same. You can write:

foreach ($my_iterator as $key => $value) {
  // $my_iterator might as well be an array, because it behaves the same here.
}

That’s where the similarity ends. You can’t assign values to an iterator, and you can’t arbitrarily look up a key. You can only loop over the iterator from beginning to end.

In the context of the Migrate module, the iterator is what provides each result row, or each piece of content, to be imported into your Drupal site. In the context of our Eventbrite implementation, our iterator is what made requests to the Eventbrite API.

There are five methods which every class that implements \Iterator must have:

  1. current() - This returns the current row of data. Migrate expects this to return an array representing the data you’ll be importing. It should be raw and unprocessed. We can clean it up later.
  2. key() - This returns the current ID of the data. Migrate expects this to be the source ID of the row to be imported.
  3. next() - This advances the iterator one place. This will be called before current() is called. You should prepare your class to return the next row of data the next time current() is called. In the context of the Eventbrite API, this could mean advancing one item in the JSON response. However, Eventbrite’s API is paginated, so it was in this method that, when we had no more rows in the current page, we would make a new HTTP request for the next page of JSON data and set up our class to return the next row of data.
  4. rewind() - This resets the Iterator so that it can be looped over anew. This clears out current data and sets up the next call to the current() method to return the first result row.
  5. valid() - This indicates when the iteration is complete, i.e. when there are no more rows to return. This method returns TRUE until you’ve returned every result. When you have nothing left to return after a call to next(), you should return FALSE to tell Migrate that there is nothing left to import.

I’m not going to go into the specifics of each method here; they are highly variable and entirely dependent on the source of your migration. Is your third-party API JSON-based or XML-based, etc.? Plus, if you’re here for Eventbrite, I’ve already done all the hard work for you! I’ve made all the Eventbrite code I wrote public on Github.
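That said, here is a heavily simplified sketch of what such an iterator can look like for a paginated JSON API. Everything here is hypothetical: the $client object, its fetchPage() method, and the 'id' key are stand-ins for whatever your API actually provides.

<?php

/**
 * A hypothetical iterator over a paginated JSON API.
 */
class PaginatedApiIterator implements \Iterator {

  protected $client;
  protected $page = 1;
  protected $rows = [];
  protected $position = 0;

  public function __construct($client) {
    // $client is an assumed HTTP client wrapper whose fetchPage($page)
    // returns an array of raw result rows (empty when out of pages).
    $this->client = $client;
  }

  public function current() {
    // Return the current raw, unprocessed source row.
    return $this->rows[$this->position];
  }

  public function key() {
    // Use the row's own ID as the source ID.
    return $this->rows[$this->position]['id'];
  }

  public function next() {
    // Advance one row; when the current page is exhausted, request the
    // next page and start again at its first row.
    $this->position++;
    if (!isset($this->rows[$this->position])) {
      $this->page++;
      $this->rows = $this->client->fetchPage($this->page);
      $this->position = 0;
    }
  }

  public function rewind() {
    // Reset so the iterator can be looped over anew, starting at page 1.
    $this->page = 1;
    $this->rows = $this->client->fetchPage($this->page);
    $this->position = 0;
  }

  public function valid() {
    // FALSE once a fetch comes back empty, telling Migrate we are done.
    return isset($this->rows[$this->position]);
  }

}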

Once you’ve built your iterator, the rest of it should be smooth sailing. You’ll still need to implement the remaining methods for MigrateSourceInterface, each of which is more extensively documented on Drupal.org.

  • fields() - A list of fields available for your source rows. This is usually the top-level keys of the array returned by your Iterator’s current() method
  • getIds() - This returns the uniquely identifying fields and some schema information for your source data. E.g. user and user_id from some arbitrary data source
  • __toString() - This is usually just a human-readable name for your Migration, something like, “My Custom Migration Source”
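For a hypothetical Eventbrite-style source plugin, those three methods might look like the following sketch (the field names are invented for illustration):

public function fields() {
  // The top-level keys of the rows returned by the iterator's current().
  return [
    'id' => $this->t('The event ID'),
    'name' => $this->t('The event name'),
    'start_time' => $this->t('The event start timestamp'),
  ];
}

public function getIds() {
  // The uniquely identifying field(s) of each source row.
  return [
    'id' => ['type' => 'string'],
  ];
}

public function __toString() {
  return 'Eventbrite';
}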

Once you have all this done, you’re ready to set up a migration YAML file and almost all your custom PHP is already written.

Much of the documentation about migrations that exists today tells you to install the Migrate Plus module at this point. Migrate Plus gives you some nice Drush commands and specifies how you should place and define your migrations. Honestly, I found it completely confusing and, for our use case, entirely unnecessary. It’s a rabbit hole I wouldn’t go down. Migrate itself comes with everything we need.

To define a migration, i.e. the YAML which tells the Migrate module which plugins to use and how to map your source data into your destination content types, you’ll need to place a file in a custom module under a directory named migration_templates. I named this file eventbrite.yml, but you may name it as you want. Just make sure that the id you define in the YAML matches the filename.

The five top-level keys you must define in this file are:

  1. id: The machine ID for the migration, matching your filename
  2. label: The human-readable name of the migration; in my case, “Eventbrite”
  3. source: This is where we tell the Migrate module to use our custom source plugin; more on that below
  4. destination: This tells Migrate which plugin your content will be mapped into; usually, this will be entity:node
  5. process: This tells Migrate how to map source data values into fields on your destination content; we’ll discuss that below as well

The source key tells the Migrate module which plugin will provide the source data that it needs to migrate or import. In our case, it looked like this:

source:
  plugin: eventbrite

Where the eventbrite string must match the plugin id defined by an annotation on our custom MigrateSourcePlugin. Ours looked like this:

/**
 * @MigrateSource(
 *   id = "eventbrite"
 * )
 */
class EventbriteSource extends SourcePluginBase {
  … omitted …
}

The process key is the third and last component of our custom migration. Briefly, you use this section to map your source fields into your destination fields. As a simple example, if your source data has a key like “name,” you might map that to “title” for a node. Of all the Migrate documentation, this section on process plugins is by far the best documented, and I referenced it extensively.
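Putting the pieces together, a minimal migration_templates/eventbrite.yml might look something like this sketch. The event content type, the name source field, and the two-step title pipeline are assumptions for illustration; default_value, skip_on_empty, and callback are process plugins that ship with core Migrate:

id: eventbrite
label: Eventbrite
source:
  plugin: eventbrite
destination:
  plugin: entity:node
process:
  type:
    plugin: default_value
    default_value: event
  title:
    -
      plugin: skip_on_empty
      method: row
      source: name
    -
      plugin: callback
      callable: trim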

The biggest misunderstanding I’ve seen about the process section is just how powerful “pipelines” and ProcessPlugins can be. Do not worry about cleaning up and processing your data in your custom iterator. Instead, do it with ProcessPlugins and pipelines.

The documentation page on how to write a ProcessPlugin is incredibly short. That said, ProcessPlugins are incredibly easy to write. First, create a new class in a file named like src/Plugin/migrate/process/YourClassName.php. Your class should extend the ProcessPluginBase class. You only need to implement one method: transform().

The transform() method operates on each value, of each field, on a single result row. Thus, if your source data returns an array of strings for a field named “favorite_flavors” on a chef’s user profile, the transform method will be called once for each string in that array.

The idea is simple: the transform method takes $value as its first argument, does whatever changes it needs to, then returns the processed value. E.g., if you wanted to translate every occurrence of the word “umami” to a less pretentious word like “savory,” you would return the string “savory” every time $value was equal to “umami”.
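Here is what a complete plugin for that exact example might look like; the plugin id, class name, and module name are invented:

<?php

namespace Drupal\mymodule\Plugin\migrate\process;

use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\Row;

/**
 * Replaces every occurrence of "umami" with "savory".
 *
 * @MigrateProcessPlugin(
 *   id = "depretentiousize"
 * )
 */
class Depretentiousize extends ProcessPluginBase {

  /**
   * {@inheritdoc}
   */
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    // Called once per value; return the processed value.
    return $value === 'umami' ? 'savory' : $value;
  }

}

Adding plugin: depretentiousize as a step in a field’s pipeline is then all it takes to apply it.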

By composing different processors and understanding what already comes free with the Migrate module (see the list of built-in processors), complicated migrations become much simpler to reason about as complexity grows.

Continuity

The single biggest differentiating factor between using Migrate for a legacy site migration and using Migrate for content synchronization is that you’ll run your migrations continuously on a regular interval. Usually, something like every 30 minutes or every hour. In order to run your migration continuously, it’s important for your migration to know a few things:

  • What has already been migrated
  • What, if anything, has been updated since the last run
  • What is new

When the Migrate module can answer these questions, it can optimize the migration so it only imports or updates what needs to be changed, i.e., it doesn’t import the same content over and over.

To do this, you need to specify one of two methods for answering these questions. You can either specify track_changes: “TRUE” under the source in your migration YAML, or you can specify a high_water_property. The former will hash each result row and compare it to a previously computed hash. If they match, Migrate will skip that row. The latter will examine the property you specify and compare it to the same property from the previous migration. If the incoming high water property is higher, then Migrate knows it should import the row. Typically, you might use something like a “changed” or “updated” timestamp on the incoming content as your high water property.

Both methods work fine; sometimes you just might be unable to use one or the other. For example, if there are no available properties on your source data to act as a high water mark, then the track_changes method is your only option. Conversely, you may be unable to use track_changes if there are fields on your source data that might change over time (thereby changing the hash of the content) but that should not trigger an update when they change.
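In the migration YAML, the two options look like this; the changed property name is an assumption about what your source rows expose:

# Option 1: hash each row and skip unchanged ones.
source:
  plugin: eventbrite
  track_changes: 'TRUE'

# Option 2: compare a timestamp-like property against the previous run.
source:
  plugin: eventbrite
  high_water_property:
    name: changed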

Cron

The final piece of the puzzle is actually ensuring that your migration runs on a regular basis. To do that, you’ll want to write a little bit of code to run your migration on cron.

I found this tutorial on cron and the Drupal 8 Queue API to be very informative and helpful. I would recommend it if you’d like to learn more about Drupal’s Queue API. Here, we’re just going to go over the minimum required effort to get a migration importing regularly on cron.

First, you’ll need to implement hook_cron in a custom module. Put the following in that function:

/**
 * Implements hook_cron().
 *
 * Schedules a synchronization of my migration.
 */
function mymodule_importer_cron() {
  $queue = \Drupal::queue('mymodule_importer');

  // We only ever need to be sure that we get the latest content once. Lining
  // up multiple syncs in a row would be unnecessary and would just be a
  // resource hog. We check the queue depth to prevent that.
  $queue_depth = (integer) $queue->numberOfItems();
  if ($queue_depth === 0) {
    $queue->createItem(TRUE);
  }
}

In the above, we’re loading a queue and adding an item to that queue. Below, we’ll implement a QueueWorker that will run your migration when there is an item in the queue. It’s possible that the migration might take longer than the amount of time you have between cron runs. In that case, items would start piling up and you would never empty the queue. Here, we just make sure we have one item in the queue. There’s no reason to let them pile up.

Next, in a file named like src/Plugin/QueueWorker/MyModuleImporterQueueWorker.php, we’ll write a class that extends QueueWorkerBase:

namespace Drupal\mymodule_importer\Plugin\QueueWorker;

use Drupal\Core\Plugin\ContainerFactoryPluginInterface;
use Drupal\Core\Queue\QueueWorkerBase;
use Drupal\Component\Plugin\PluginManagerInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;
use Drupal\migrate\MigrateExecutable;
use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\MigrateMessage;

/**
 * @QueueWorker(
 *   id = "mymodule_importer",
 *   title = @Translation("My Module Cron Importer"),
 *   cron = {
 *     "time" = 30,
 *   },
 * )
 */
class MyModuleImporterQueueWorker extends QueueWorkerBase implements ContainerFactoryPluginInterface {
  … omitted …
}

Notice that the id value in the annotation matches the name we put in our implementation of hook_cron. That is important. The “time” value is the maximum time that this run of your worker is allowed to take. If it does not complete in time, it will be killed and the item will remain in the queue and will be executed on the next cron run.

Within the class, we’ll inject the migration plugin manager…

public function __construct(PluginManagerInterface $migration_manager) {
  $this->migrationManager = $migration_manager;
}

public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition) {
  return new static($container->get('plugin.manager.migration'));
}

public function processItem($item) {
  $migration = $this->migrationManager->createInstance('your_migration');
  $message = new MigrateMessage('Content Imported');
  $executable = new MigrateExecutable($migration, $message);
  $executable->import();
}

I’ve left out a lot of code standards above for clarity (don’t judge). The key thing to notice is that 'your_migration' must match the id of the migration in your migration YAML file. The rest of the processItem() method is just a little limbo to get your migration to a point where you can call import() without an error.

With all this written, your migration will be run every time cron is executed.

Conclusion

It took a lot of research to get this working the first time, but we still saved a lot of time by using as much as we could from the Migrate module to implement third-party content synchronization. Since writing this initial implementation, I’ve been able to simply tweak the iterator and machine names in order to implement synchronization with another API on another project. Getting everything set up and working took about a day and will probably take less time in the future.

You can see the work in action at Museum of Contemporary Art Denver – just check out their events page.

I hope you’ll let me know if you give this a try, and what you did and didn’t find helpful!

Stanford Environmental Health & Safety brandt Wed, 09/06/2017 - 22:36

Distilling thousands of pages of critical content into an easy-to-use interface

Thoughtful content strategy to put users first.

Highlights
  • Content migration and organization of thousands of pieces of content

  • User-focused visual interface

  • Drupal architecture driven by a comprehensive taxonomy

Our Client

Stanford University pursues ground-breaking research in almost every field of human endeavor. Its 2,000 faculty members can found their own laboratories and pursue new directions in medical, scientific, engineering, and humanistic research. Their experiments can involve Class 4 lasers, viruses, and chemical, biochemical, and radiological hazards. Over $1B (yes, really) in annual research funding supports hundreds of research labs.

Stanford’s Environmental Health and Safety department (EH&S) is the principal entity responsible not only for compliance with the law, but also for ensuring that the Stanford community is safe — an extremely challenging mission given the extent of Stanford’s activities.

The Challenge

Despite a conscious effort to avoid being viewed as an “enforcement” agency, EH&S is often viewed as an impediment between individuals and their chosen tasks. Unfortunately, their outdated site reinforced this perception: it was disorganized, difficult to maintain and update, and not mobile friendly. The site hosted an incredibly large volume of content — everything from safety manuals and PDFs to critical documentation — that was buried and hard to find. Thus, the former online space for EH&S was often an additional barrier for those trying to conduct their work in a safe and compliant manner.

The business goals became the following:

  • Create a responsive web experience that solves problems for real users, and advances the goals of EH&S to realize a safe and healthy Stanford.
  • Revamp the Information Architecture (IA) to make the site simple to use, match users’ expectations for how safety information is organized, and make it easy for people to find what they need.
  • Shift the perception and language of safety away from “compliance” and toward something more progressive and attractive, to persuade unmotivated users to participate more fully.
  • Eliminate bottlenecks for users who are already motivated to take positive action.
  • Allow for analytical data to be taken from the site to determine the level of success, and be able to adjust the site as needed in response to the data.

The Solution

Because of the sheer variety and volume of audiences, types of content, and types of tasks needed, the solution required a deep understanding of both the structure of the content as well as how that content was accessed. This was in addition to getting several thousand pieces of content wrangled out of PDFs and migrated to the new site, which required a thorough content audit.

Strategy work consisted of persona development to learn about the various audiences, and the Stanford and Palantir teams worked together on card sorting exercises to determine the best information architecture. The taxonomy was re-categorized to allow related content to surface easily; when a user went to a page about chemical safety, for example, they’d be shown related content under the right category, such as proper chemical disposal (under Services), a chemical storage form (Forms), and courses needed to handle that chemical (Training).

Users could self-select which role best suited them in order to view all the content related to their role.

In order to create the easiest user experience, we determined that the site should be broken down both by topic and by role. A series of icons was created to help delineate between the 20 different health and safety topics, and photography of real Stanford employees and students was used to represent the 13 types of users identified via persona workshops. Quick links were provided to make it easy for users to do the most common tasks, and a faceted Solr search was implemented to help users locate forms, manuals, training documents, and standard operating procedures.

All of this was accomplished with a visual theme that worked within Stanford University’s overarching brand standards and was designed to allow for clarity and simplicity.

The Results

Stanford strives for excellence in all programs, and that should extend to safety as well. While safety content may not be the most exciting reading, it is critical that it is found quickly and is clearly presented in order to keep safety a priority.

Through content strategy and a supporting architecture, we were able to meet the product owner’s ultimate goal of “content on demand.” As he stated, “what I want, when I want, where I want!” The new site allows EH&S to present an image of a professional, knowledgeable, and helpful service provider that is fundamental to the unique experience of being at Stanford.

ehs.stanford.edu/

Recently I had to generate term-specific aliases (aliases that differ from the default alias pattern set for Article entities). Here is how to do it:

1. Enable the Pathauto module
2. Set the default URL alias pattern for your content type in order to fire the hook
3. Implement hook_pathauto_alias_alter() in your .module file.

Example module structure:

mymodule/
- mymodule.info.yml
- mymodule.module
- src/
  - ArticlePathAlias.php

I like to keep the .module file clean and simple, so I store the main logic in the src/ArticlePathAlias.php file.

The mymodule.info.yml file is just a regular .info file.

4. Add the following to your mymodule.module file:
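A minimal sketch of what that can look like, delegating the logic to the ArticlePathAlias class; the alter() method and its behavior are hypothetical:

<?php

use Drupal\mymodule\ArticlePathAlias;

/**
 * Implements hook_pathauto_alias_alter().
 */
function mymodule_pathauto_alias_alter(&$alias, array &$context) {
  // Keep the .module file thin: hand the term-specific alias logic
  // over to the class in src/ArticlePathAlias.php.
  $alias = ArticlePathAlias::alter($alias, $context);
}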

The drought is finally over. After a long time with no events in which we participated, it is once again time to go around the world and share the knowledge. The two summer months (July and August) are practically always »sleepy«, so we are thrilled to announce that we will be seeing you tomorrow at DrupalCamp Antwerp. We promised that you would be informed as much as possible about where to find us besides in our office, and we are keeping our promise once again. So, if you have a particular subject in mind and you like discussing things about Drupal, or anything really, say hello to us at the…
Project Spotlight – J.D. Power Advanced Search

Appnovation was recently tasked with redesigning J.D. Power’s car and article search functionality. The current car search implementation was based on Drupal Views searching against the database, while the article search was a basic Apachesolr (Drupal contrib module) search page. The car and article search functionality was to be based on A...

The first release candidate for the upcoming Drupal 8.4.0 release is now available for testing. Drupal 8.4.0 is expected to be released October 4.

Download Drupal-8.4.0-rc1

8.4.x includes new stable modules for storing date and time ranges, displaying form errors inline, and managing workflows. New stable API modules for discovering layout definitions and media management are also included. The media API module is new in core; all the other new stable modules were formerly experimental. The release also includes several important fixes for content revision data integrity, orphan file management, and configuration data ordering, among other things. You can read a detailed list of improvements in the announcements of alpha1 and beta1.

What does this mean to me?

For Drupal 8 site owners

The final bugfix release of 8.3.x has been released. A final security release window for 8.3.x is scheduled for September 20, but 8.3.x will receive no further releases following 8.4.0, and sites should prepare to update from 8.3.x to 8.4.x in order to continue getting bug and security fixes. Use update.php to update your 8.3.x sites to the 8.4.x series, just as you would to update from (e.g.) 8.3.4 to 8.3.5. You can use this release candidate to test the update. (Always back up your data before updating sites, and do not test updates in production.)

For module and theme authors

Drupal 8.4.x is backwards-compatible with 8.3.x. However, it does include internal API changes and API changes to experimental modules, so some minor updates may be required. Review the change records for 8.4.x, and test modules and themes with the release candidate now.

For translators

Some text changes were made since Drupal 8.3.0. Localize.drupal.org automatically offers these new and modified strings for translation. Strings are frozen with the release candidate, so translators can now update translations.

For core developers

All outstanding issues filed against 8.3.x were automatically migrated to 8.4.x. Future bug reports should be targeted against the 8.4.x branch. 8.5.x will remain open for new development during the 8.4.x release candidate phase. For more information, see the release candidate phase announcement.

Your bug reports help make Drupal better!

Release candidates are a chance to identify bugs for the upcoming release, so help us by searching the issue queue for any bugs you find, and filing a new issue if your bug has not been reported yet.

I am excited to announce that the D8 port of the Autocomplete Deluxe module has been released in “beta”!

What does it do?

The Autocomplete Deluxe module provides a widget that enhances the default autocomplete field in Drupal. It uses jQuery UI autocomplete and provides a slick visual element for content editors to reference terms - displaying them inline, reordering them via drag-and-drop, and creating new terms from the field itself. It works out of the box, and no third-party libraries are needed.

The topic was “Distributions” at the September Boston Drupal Meetup, which was held at Acquia HQ in downtown Boston, and attendees were treated to an unusually comprehensive session.

That’s because Drupal Project Lead Dries Buytaert kicked off the meeting by going waaay back, to the very first Drupal “distro.”

To back up a bit, a distribution is a combination of Drupal core + modules + configuration + documentation -- all bundled up and optimized for a particular purpose or group of users.


In 2012 we wrote a blog post about why many of the biggest government websites were turning to Drupal. The fact is, an overwhelming number of government organizations from state and local branches to federal agencies have chosen to build their digital presence with Drupal, and government continues to adopt Drupal as the content management system of choice.

Matt & Mike are joined by Pantheon co-founder and CTO David Strauss, Pantheon co-founder and Head of Product Josh Koenig, as well as Lullabot's own performance expert Nate Lampton to talk everything performance. Topics include front-end performance, server-side PHP, CDNs, caching, and more.

The Maestro Workflow Engine for Drupal 8 is now available as a beta download! It has taken many months of development to move Maestro out of the D7 environment to a more D8-integrated structure, and we think the changes made will benefit both the end user and the developer. This post is the first of many on Maestro for D8; it gives an overview of the module and provides a starting point regardless of previous Maestro experience.

We've put together a Maestro overview video introducing you to Maestro for Drupal 8. Maestro is a workflow engine that allows you to create and automate a sequence of tasks representing any business process. Our business workflow engine has existed in various forms since 2003 and, through many years of refinements, was released for Drupal 7 in 2010.

If it can be flow-charted, then it can be automated

Now, with the significant updates for Drupal 8, Maestro has been rewritten to take advantage of the Drupal 8 core improvements and module development best practices. Maestro now provides tighter integration with native Views and entity support.

Maestro is a solution for automating business workflows, which typically include the movement of documents or forms for editing and review/approval: business processes that require conditional tests, i.e., IF this THEN that.
