Drupal Feeds

As part of our series discussing the use of Drupal in non-profits (click here to subscribe via e-mail), we recently reached out to one of our favorite clients, WIEGO, who candidly shared some of their struggles and successes.

Since re-launching their site on Drupal almost 6 years ago, they've grown from a site with 50 static pages to a searchable, categorized repository of news and knowledge spanning over 22,000 articles!

In this case study, we gain some insight into how organizations like WIEGO decided on Drupal, have lived with some of the growing pains, and are planning for the future!

Read more to find out!

Over the past two years, I’ve had the opportunity to work with many different clients on their Drupal 8 site builds. Each of these clients had a large development team with significant amounts of custom code. After a recent launch, I went back and pulled together the common recommendations we made. Here they are!

1. Try to use fewer repositories and projects

With the advent of Composer for Drupal site building, it feels natural to have many small, individual repositories for each custom module and theme. It has the advantage of feeling familiar to the contrib workflow for Drupal modules, but there are significant costs to this model that only become obvious as code complexity grows.

The first cost is that, at best, every bit of work requires two pull requests: one in the custom module repository, and a second updating the composer.lock in the site repository. It’s easy to forget about that second pull request, and in our case, it led to constant questioning by the QA team about whether a given ticket was ready to test or not.

A second cost is dealing with cross-repository dependencies. For example, in site implementations, it’s really common to do some work in a custom module and then to theme that work in a custom theme. Even if there’s only a master branch, there would still be three pull requests for this work—and they all have to be merged in the right order. With a single repository, you have a choice. A single pull request can be reviewed and merged, or multiple can be filed.

A third, and truly insidious cost is where separate repositories actually become co-dependent, and no one knows it. This can happen when modules are only tested in the context of a single site and database, and not as site-independent reusable modules. Is your QA team testing each project against a stock Drupal install as well as within your site? Are they mixing and matching different tags from each repository when testing? If not, it’s better to just have a single site repository.

2. Start with fewer branches, and add more as needed

Sometimes, it feels good to start a new project by creating all of the environment-specific branches you know you’ll need: develop, qa, staging, master, and so on. It’s important to ask yourself: is each branch being used? Do we have environments for all of these branches? If not, it’s totally OK to start with a single master branch. If you do have multiple git repositories, ask this question for each repository independently. Perhaps your site repository has several branches, while the new SSO module that you’re building for multiple sites sticks with just a master branch. Branches should have meaning. If they don’t, they just confuse developers, QA, and stakeholders, leading to deployment mistakes. Delete them.

3. Avoid parallel projects

Once you do have multiple branches, it’s really important to ensure that branches are eventually merged “upstream.” With Composer, it’s possible to have different composer.json files in each branch, such as qa pointing to the develop branch of each custom module, and staging pointing to master. This causes all sorts of confusion, because it effectively means you have two different software products—what QA and developers use, and what site users see. It also means that changes in the project scaffolding have to be made once in each branch. If you forget to do that, it’s nothing but pain trying to figure it out! Instead, use environment branches to represent the state of another branch at a given time, and then tag those branches for production releases. That way, you know that tag 1.3.2 is identical to some build on your develop branch (even if the hash isn’t identical due to merge commits).

4. Treat merge conflicts as an opportunity

I’ve heard from multiple developers that the real reason for individual repositories for custom modules is to “reduce merge conflicts.” Let’s think about the effect multiple repositories have on a typical Drupal site.

I like to think about merge conflicts in three types. First, there’s the traditional merge conflict, such as when git refuses to merge a branch automatically. Two lines of code have been changed independently, and a developer needs to resolve them. Second, there are logical merge conflicts. These don’t cause a merge conflict that version control can detect but do represent a conflict in code. For example, two developers might add the same method name to a class but in different text locations in the class. Git will happily merge these together, but the result is invalid PHP code. Finally, there are functional merge conflicts. This is where the PHP code is valid, but there is a regression or unexpected behavior in related code.
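To make the logical conflict concrete, here is a hedged sketch (the class and method names are hypothetical) of what git can happily produce after merging two branches that each added the same method:

class ArticleFormatter {

  // Added on branch A, near the top of the class.
  public function formatTitle($title) {
    return strtoupper($title);
  }

  // Added on branch B, near the bottom of the class. Both hunks merge
  // cleanly, but PHP now fails with "Cannot redeclare
  // ArticleFormatter::formatTitle()".
  public function formatTitle($title) {
    return ucfirst($title);
  }

}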

Split repositories don’t have much of an effect on traditional merge conflicts. I’ve found that split repositories make logical conflicts a little harder to manage. Typically, this happens when a base class or array is modified and the developer misses all of the places to update code. However, split repositories make functional conflicts drastically more difficult to handle. Since developers are working in individual repositories, they may not always realize that they are working at cross-purposes. And, when there are dependencies between projects, it requires careful merging to make sure everything is merged in the right order.

If developers are working in the same repository and discover a merge conflict, it’s not a blocker. It’s a chance to make a friend! Discussing the conflict gives developers the chance to make sure they are solving the right problem, the right way. If conflicts are really complex, it’s an opportunity to either refactor the code or raise the issue with the rest of the team. There’s nothing more exciting than realizing that a merge conflict revealed conflicting requirements.

5. Set up config management early

I’ve seen several Drupal 8 teams delay in setting up a deployment workflow that integrates with Drupal 8’s configuration management. Instead, deployments involve pushing code and manual UI work, clicking changes together. Then, developers pull down the production database to keep up to date.

Unfortunately, manual configuration is prone to error. All it takes is one mistake, and valuable QA time is wasted. It also sidesteps code review of configuration, which is actually possible and enjoyable with Drupal 8’s YAML configuration exports.

The nice thing about configuration management tooling is it typically doesn’t have any dependency on your actual site requirements. This includes:

  • Making sure each environment pulls in updated configs on deployment
  • Aborting deployments and rolling back if config imports fail
  • Getting the development team comfortable with config basics
  • Setting up the secure use of API keys through environment variables and settings.php.

Doing these things early will pay off tenfold during development.
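On the last point about API keys, a minimal sketch of what this can look like in settings.php (the environment variable and setting names are hypothetical):

// settings.php: read secrets from the environment rather than
// committing them to the repository or to exported config.
$settings['mysite_api_key'] = getenv('MYSITE_API_KEY');

With this in place, each environment can supply its own key without any code or config changes.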

6. Secure sites early

I recently worked on a site that was only a few weeks away from the production launch. The work was far enough along that the site was available outside of the corporate VPN under a “beta” subdomain. Much to my surprise, the site wasn’t under HTTPS at all. As well, the Drupal admin password was the name of the site!

These weren’t things that the team had forgotten about; but, in the rush of the last few sprints, it was clear the two issues weren’t going to be fixed until a few days before launch. HTTPS setup, in particular, is a great example of an early setup task. Even if you aren’t on your production infrastructure, set up SSL certificates anyway. Treat any new environment without SSL as a launch blocker. Consider using Let's Encrypt if procuring proper certificates is a lengthy process.

This phase is also a good chance to make sure admin and editorial accounts are secure. We recommend that the admin account password is set to a long random string—and then, don’t save or record the password. This eliminates password sharing and encourages editors to use their own separate accounts. Site admins and ops can instead use ssh and drush user-login to generate one-time login links as needed.
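Under the hood, a one-time login link can be generated with Drupal 8's user module APIs; this sketch is roughly what drush user-login does for you:

// Load the account and build a one-time login (password reset) URL.
$account = \Drupal\user\Entity\User::load(1);
$url = user_pass_reset_url($account);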

7. Make downsyncs normal

Copying databases and file systems between environments can be a real pain, especially if your organization uses a custom Docker-based infrastructure. rsync doesn’t work well (because most Docker containers don’t run ssh), and there may be additional networking restrictions that block the usual sql-sync commands.

This leads many dev teams to really hold off on pulling down content to lower environments because it’s such a pain to do. This workflow really throws QA and developers for a loop, because they aren’t testing and working against what production actually is. Even if it has to be entirely custom, it’s worth automating these steps for your environments. Ideally, it should be a one-button click to copy the database and files from one environment to a lower environment. Doing this early will improve your sprint velocity and give your team the confidence they need in the final weeks before launch.

8. Validate deployments

When deploying new code to an environment, it’s important to fail builds if something goes wrong. In a typical Drupal site, a deployment can fail at several points:

  • composer install
  • drush updatedb
  • drush config-import
  • The deployment steps could all succeed, yet the site could still be broken and returning HTTP 500 error codes

Each deployment should capture the deployment logs and store them. If any step fails, subsequent steps should be aborted, and the site rolled back to its previous state. Speaking of…

9. Automate backups and reverts

When a deployment fails, it should be nearly automatic to revert the site to the pre-deployment state. Since Drupal updates involve touching the database and the file system, both should be reverted. Database restores tend to be fairly straightforward, though filesystem restores can be more complex if files are stored on S3 or some other service. If you’re hosted on AWS or a similar platform, use their APIs and utilities to manage backups and restores where possible. They have internal access to their systems, making backups much more efficient. As a side benefit, this helps make downsyncs more robust, as they can be treated as a restore of a production backup instead of a direct copy.

10. Remember #cache

Ok, I suppose I mean “remember caching everywhere,” though in D8 it seems like render cache dependencies are what’s most commonly forgotten. It’s so easy to fall into Drupal 7 patterns, and just create render arrays as we always have. After all, on locals, everything works fine! But, forgetting to use addCacheableDependency() on render arrays leads to confusing bugs down the line.
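As a minimal sketch (the module and setting names are hypothetical), a render array built from a config object should declare that dependency, so that saving the config invalidates the cached output:

$config = \Drupal::config('mymodule.settings');
$build = [
  '#markup' => $config->get('message'),
];
// Attach the config object's cache tags to the render array, so
// changing the setting invalidates this output automatically.
\Drupal::service('renderer')->addCacheableDependency($build, $config);

Without that last line, the stale markup sticks around until the next full cache rebuild: exactly the bug described below.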

Along the same lines, it’s important to set up invalidation caching early in the infrastructure process. Otherwise, odds are you’ll get to the production launch and be forced to rely on TTL caches simply because the site wasn’t built or tested for invalidation caching. It’s a good practice when setting up a reverse proxy to let Drupal maintain the caching rules, instead of creating them in the proxy itself. In other words, respect Cache-Control and friends from upstream systems, and only override them in very specific cases.

Finally, be sure to test on locals with caches enabled. Sure, disable them while writing code, but afterwards turn them back on and check again. I find incognito or private browsing windows invaluable here, as they let you test as an anonymous user at the same time as being logged in. For example, did you just add a config form that changes how the page is displayed? Flip a setting, reload the page as anonymous, and make sure the update is instant. If you have to do a drush cache-rebuild for it to work, you know you’ve forgotten #cache somewhere.

What commandments did I miss in this list? Post below and let me know!

Header image from Control room of a power plant.

Lots of stuff has been changing in Drupal 8 recently. In 8.3.0, a new experimental "layout discovery" module was added to core, which conflicted with the contrib "layout plugin" module. Now in 8.3.3, the two-column and three-column layouts had their region names changed, which hid any content dropped into those regions when those layouts were used.

In the past week, we've seen a couple issues upgrading a site from 8.2.x to 8.3.2, and now another issue with 8.3.2 to 8.3.3 that seem worth a quick mention.

Tags: Drupal, Drupal 8, Drupal Planet, Updates, Visual Regression Testing

Agaric is grateful to the Drupal community for all the effort poured into the amazing collaborative project. As part of giving back to it, we go to conferences to share with others what we have learned. These are some events where Agaric will be presenting this month.

Eastern Conference for Workplace Democracy

This is a convergence of worker-owned cooperatives. Representatives come from all over the country to attend workshops and sessions on all things related to owning a cooperative. It will be held in New York City the weekend of June 9-11 at the John Jay College of Criminal Justice.

Benjamin and Micky will be hosting a workshop/discussion with Danny Spitzberg on Drutopia. They will cover how it can help cooperatives and smaller businesses have a web presence above and beyond what they could otherwise afford, by consolidating hosting and feature development into a group effort.

Montreal Drupal Camp

This event will take place on June 15-18 at the John Molson School of Business at Concordia University. Benjamin will be speaking on how Software as a Service can lead to long-term success in a software project.

Twin Cities Drupal Camp in Minneapolis

At Twin Cities, Agaric will be presenting one workshop and two sessions.

On Thursday, June 22, Benjamin and Mauricio will be presenting the Getting Started with Drupal workshop. It is aimed at people who are just starting with Drupal and want a bird's-eye view of how the system works. As part of the workshop, attendees will have the chance to create a simple yet functional website to put their new knowledge into practice. The organizers have gone above and beyond to make this training FREE for everyone! You do not even need a camp ticket to participate. You just need to register.

On Saturday, June 24, Mauricio will be presenting on Drupal 8 Twig recipes. This will be an overview of the theme system in Drupal 8 and will include practical examples of modifying the default markup to your needs. The same day, Benjamin will present his Software as a Service session.

Design4Drupal

This is THE yearly camp for Drupal doers in Boston, and it happens June 22nd-23rd. Micky will be hosting a workshop/discussion on Drutopia, an initiative within the Drupal project based on social justice values and focused on building collectively owned online tools. Current focuses include two Drupal distributions aimed at grassroots groups, also offered as software as a service, ensuring that the latest technology is accessible to low-resourced communities.
Agaric will have a busy month attending and speaking at conferences. Please come say hi and have fun with us.

There is one module that makes designing for Drupal 7 much, much easier: Theme Developer.

You can think of Theme Developer as a Drupal-specific version of Firebug or Chrome Developer Tools. Using Theme Developer, you can click on any element of your Drupal site and get a breakdown of how it was built.

Theme Developer has some downsides: it hasn't been updated in a while, and (like anything related to the Devel module) it shouldn't be used on live sites. But it can still be a useful tool for Drupal 7 themers.

  • In the bottom-left corner of the screen, you will see a small "Themer Info" area.

  • Check the box in that area.
  • Up in the top-right corner of the site, you'll see a larger black box.

  • The bar does a pretty good job of explaining what to do! Just like Firebug or Chrome Dev Tools, you can inspect areas of your Drupal site.
  • When you click on a page element, you'll see a red box around that particular element.
  • The Theme Developer box will now show information about your chosen page element.

Here are some of the details you'll see:

  • Template called: the name of the file which is controlling the layout of this element
  • File used: the location of the file controlling the layout
  • Candidate template files: if you'd like to create an override for this part of the page, these are suggested file names.
  • Preprocess functions: These functions connect what happens in the module code to what gets sent to the theme

If you want to use the candidate template files, the easiest thing to do is to copy the "Template called" file, rename it, and save it in your theme's templates folder. Here's what the files mentioned in this example would do:

  • block-user-1.tpl.php ... if you create this file, it will only provide a template for this particular block
  • block-user.tpl.php ... if you create this file, it will provide a template for all blocks from the user module
  • block-left.tpl.php ... if you create this file, it will provide a template for blocks in the left region
  • block.tpl.php ... if you create this file, it will provide a template for all blocks

The video embedded in the original post offers a great explanation of the Theme Developer module.


We built a platform to run Compose remotely and easily, so that you don't have to know how to use it. It's called Composy.io.

There are no tools or modules available for Drupal project management on drupal.org. It's all about your people and how you manage them. A few weeks back, my colleague wrote about ‘How to set the right expectations for project delivery?’.
 
I am a Drupal Project Manager, and in this blog post I have written about the ways I manage a Drupal project.
 
Let’s list the core objectives of any project manager.
 
A Drupal project is nothing out of the ordinary: it is simply a project with focused Drupal development and maintenance tasks.
 
The main objectives are:

  • To Stay On Budget
  • Finish On-Time…

As project managers (PMs), we are often asked to deliver on Key Results Areas (KRAs), to put up “our best show.”

Unfortunately, most of us think that as a project manager, our only task is on-time, quality delivery within a stipulated budget.

However, in this rat race, we tend to forget what makes us different from the rest: the soft skills that, if honed properly, enable us to manage our users, sponsors, and all our stakeholders. Only then is our job done.

Tags: acquia drupal planet
6 Things to Consider During a Nonprofit Web Design Project
Jill Farley, Thu, 06/08/2017 - 11:45

Every day, cultural institutions face unique challenges while working towards a mission with a small technical staff and digital budget. Limited money, revolving staff, and regulatory pressures require visionaries to think ahead of their competition to build a digital presence that doesn’t tie their hands with expensive proprietary licenses and high maintenance code. 

So often, cultural nonprofits feel pain triggered by the decision of another department. When technology solutions like ticketing, donor and membership management, point-of-sale, email marketing, and content management are selected without cross department communication, they won’t integrate. This causes struggles big and small like: 

  • Extracting or inputting data 
  • Battling with vendors to get even the most innocuous tracking code installed
  • Making the public-facing user experience feel seamless
  • Customizing the look and feel of simple things, like forms and checkout screens
  • Keeping content (like event descriptions) consistent across systems

After more than 13 years working with nonprofit and cultural institutions, like the Denver Botanic Gardens, we’ve seen that these problems are epidemic. Drawing from experience, we have a few ideas about why that could be, and how Drupal can help. 

Mind Shift: Expense vs. Investment

The big problem is that most nonprofit web design projects are considered a one-time expense, instead of a long term investment. 

Expenses are a one-time cost with a start and end-date. Once a purchase has been made, it is scrutinized as an operating cost by the board and the finance committee, often dubbed a ‘necessary evil.’

Investments require long term, strategic thinking. They receive ongoing budget priority and dedicated resources. They, like an employee, are expected to make money and be accountable.

When a for-profit company spends money on the development of a new product or venture, they bank their business on it. They set goals and expect it will eventually enjoy returns that will help the company grow. 

Treating technology spend as an investment rather than an expense can position a nonprofit to be more strategic about its vendor selection, increase direct revenues from a nonprofit website design and generate longer term buy-in from leadership.

How can nonprofits make the shift?

1. Invest in open source. Open source software differs from platforms provided by Microsoft, Adobe, etc. in that it doesn't cost anything to license and use. It also means you can pick up your site and take it to any vendor. Because it's open source, Drupal is updated and maintained by millions of developers (a lot like Wikipedia). This means that when a new social media platform becomes popular for example, the community can create an integration within a matter of days or even hours. 

2. Make integration-focused software a priority. Own the technology. Don’t let the technology roadmap be dictated by whether another company thinks a feature is important. Pick vendors by their commitment to playing nice with other tools, not by how many out-of-the-box features they have, and always, always make APIs a priority. Drupal can integrate with almost any platform, regardless of how old or specific, while less flexible platforms have a hard time integrating with third-party software. Drupal works well with things like Salesforce, Hubspot, Marketo, and countless more.

3. Learn how well Drupal works for nonprofits. It is a scalable content management and system integration platform of choice. Trusted by institutions like Greenpeace, LACMA, The Red Cross, and The White House, Drupal offers the ability to integrate with enterprise solutions like Blackbaud/Convio, Magento and other commerce platforms, ticketing systems like Galaxy and Tessitura, and more that haven’t been invented yet. Integrations, scalability, and speed to market are all things to keep in mind when selecting digital tools.

4. Think in terms of conversions. Measure. Technology tools should save and make money, directly or indirectly. Have higher expectations of a ticketing system, a content management system, or a volunteer management system. Figure out which things are valuable and can be tracked as “conversions”. Assign value to non-monetary outcomes so gain and ROI can be calculated.

For example: 

A volunteer may not be a revenue line, but recruiting someone takes valuable staff time. Calculate how the website can do some of that work for you. 

  • Volunteer value - $50 each 
  • New dynamic volunteer signup form - $1,200 
  • Result? 30 more recruits than usual 
  • ROI: ((30 x $50) - $1200) / $1200 = 25%

25% return? Not bad.

Managing and reconciling event information across all website platforms can be cumbersome and require tons of time from a content manager.

  • Staff cost - $40/hour
  • Manual ticketing effort for event - 120 hours
  • Calendar API integration - $2000
  • Automated ticketing effort for event - 40 hours
  • Annual Savings: ((120 x $40) - ((40 x $40)+$2000)) = $1,200

$1,200 savings every year? Nice.

5. Keep your staff happy. Drupal is built to make sense to users of any technical skill level, and the admin interface can be optimized for any type of workflow. The interface can even be customized to look like other systems that users may be more familiar with. Content edits can be made easily, and Drupal can be configured to allow for revisions and approval from multiple content editors with various permission levels. 

6. Don’t forget hosting. During a nonprofit web design project, and throughout the life of a website, it is important to have the support of a reputable hosting company. A Drupal-specific hosting company, like Acquia, offers the most comprehensive bundle of support and integrated hosting services, which as a long-term investment can save thousands. For a nonprofit, that kind of reliable maintenance and security is unmatched.


Drupal is a long-term investment because it can be scaled as a nonprofit institution grows. It can save time, money, and hassle, especially when paired with a top-notch hosting platform, like Acquia. In our tenure working with nonprofits like the Denver Botanic Gardens, The NFPA, and the Colorado General Assembly, we’ve solved many problems using Drupal. If your nonprofit or cultural institution could use an overhaul, contact us.
 

How do we grow a sustainable Drupal business in an increasingly competitive marketplace, and how can we innovate and diversify to stay ahead? Help us address those questions at DrupalCon Vienna 2017 with your sessions on the Business track. DrupalCon is the biggest Drupal event, where people meet and discuss Drupal as technology and community. This year, the European DrupalCon will take place in Vienna, Austria. DrupalCon aims to provide something for everyone, whether you are a developer, designer or entrepreneur. To provide this mix of content, we organized it into 11 tracks: Being Human…
A high-level overview of the Plugin system in Drupal 8
spencer, Fri, 06/09/2017 - 03:20

I am pleased to announce that Lee Rowlands has accepted our invitation to be a Drupal 8 provisional framework manager.

Lee is based in Australia and has been heavily involved with the Drupal community both at home and internationally. His involvement with core and his contributions to a huge variety of projects on Drupal.org are impressive. You can read more about his contributions in his Community Spotlight. A quote:

"As a contributor you are incredibly lucky to have your work constructively reviewed by some of the world's best programmers. Every time someone makes a suggestion on your patch, you learn a little more. I've learnt so many programming concepts from reviewing other's code and having my code reviewed by others."

For years, Lee has been stepping up to do what's most needed for Drupal. For example, when Forum module was potentially at risk of being removed from core, Lee stepped up to adopt it in response. He's also very active on the Drupal security team to ensure fixes go out in a timely manner. Lee cares both about the maintainability of Drupal itself and the concerns and experiences of other Drupal contributors.

Lee builds sites for some of Australia's largest government, education, media and non-profit organizations. He has spoken regularly at events in Australia/New Zealand since 2010. In addition, Lee is a long-time mentor to others, and has inspired many people all over the world to be a part of the Drupal community.

Please join me in welcoming Lee.

Acquia Dev Desktop - a fastlane to Drupal 8 development
leanderAdmin, Fri, 06/09/2017 - 19:57

Most of the information I have come across about migrating from Drupal 6 to Drupal 8 is about migrating content. However, before tackling that problem another one must be solved first; maybe it is obvious and hence understated, so let's spell it out loud: preserving the site functionality. That means checking whether the contrib modules need to be ported to Drupal 8, and also whether the solution used in the previous version of the site can be replaced with a completely different approach in Drupal 8.

Let's take ao2.it as a case study.

When I set up ao2.it back in 2009 I was new to Drupal; I chose it mainly to have a peek at the state of Open Source web platforms.

Bottom line, I ended up using many quick and dirty hacks just to get the blog up and running: local core patches, theme hacks to solve functional problems, and so on.

Moving to Drupal 8 is an opportunity to do things properly and finally pay some technical debt.

For a moment I had even thought about moving away from Drupal completely and using a solution more suited to my usual technical taste (I have a background in C libraries and linux kernel programming), like keeping the content in git and generating static web pages. But once again I didn't want to miss out on what web frameworks are up to these days, so here I am again, getting my hands dirty with this little over-engineered personal Drupal blog, hoping that this time I can at least make it a reproducible little over-engineered personal Drupal blog.

In this series of blog posts I'll try to explain the choices I made when I set up the Drupal 6 blog and how I am re-evaluating them for the migration to Drupal 8.

The front page view

ao2.it was also an experiment in running a multi-language blog, but I never intended to translate every piece of content, so it was always a place where some articles would be in English, some in Italian, and the general pages would actually be multi-language.

This posed a problem about what to show on the front page:

  • If every node was shown, there would be duplicates for translated nodes, which can be confusing.
  • If only nodes in the current interface language were shown, the front page would list completely different content across languages, which does not represent the timeline of the blog content.

So a criterion for a front page of a partially multi-lingual site could be something like the following:

  • If a node has a translation in the current interface language, show that;
  • if not, show the original translation.
The “Select translation” module

In Drupal 6 I used the Select translation module, which worked fine, but it was not available for Drupal 8.

So I asked the maintainers if they could give me the permission to commit changes to the git repository and I started working on the port myself.

The major problem I had to deal with was that Drupal 6 approached the multi-language problem by default using the mechanism called "Content translations", where separate nodes represented different translations (i.e. different rows in the node table, each with its own nid), tied together by a tid field (translation id): different nodes with the same tid are translations of the same content.

Drupal 8 instead works with "Entity translations": one single node represents all of its translations and is listed only once in the node table, and actual translations are handled at the entity field level in the node_field_data table.

So the SQL query in Select translation needed to be adjusted to work on the node_field_data table rather than the node table, as can be seen in commit 12f70c9bb37c.
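For a sense of what "Entity translations" means in code, the front-page criterion above can also be expressed with the entity translation API (an illustrative sketch, not the module's actual implementation):

// Prefer the translation in the current interface language,
// falling back to the original (untranslated) version.
$langcode = \Drupal::languageManager()->getCurrentLanguage()->getId();
$node = \Drupal\node\Entity\Node::load($nid);
$display = $node->hasTranslation($langcode)
  ? $node->getTranslation($langcode)
  : $node->getUntranslated();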

While at it I also took the chance to refactor and clean up the code, adding a drush command to test the functionality from the command line.

The code looks better structured thanks to the Plugin infrastructure and now I trust it a little more.

Preserve language

On ao2.it I also played with the conceptual difference between the “Interface language” and the “Content language” but Drupal 6 did not have a clean mechanism to differentiate between the two.

So I used the Preserve language module to be able to only switch the interface language when the language prefix in the URL changed.

It turns out that an external module is not needed anymore for that, because in Drupal 8 there can be separate language switchers, one for the interface language and one for the content language.

However, there are still some issues with the interaction between them, as reported in Issue #2864055: LanguageNegotiationContentEntity: don't break interface language switcher links; feel free to take a look and comment on possible solutions.

More details about the content language selection in a future blog post.

Over the past 6 years, we've trained hundreds of people through our 12-week Drupal Career Online class, our new 6-week Mastering Professional Drupal Developer Workflows with Pantheon class, as well as our dozens of public and private trainings (literally) around the world. As part of our long-form 12- and 6-week classes, we've been providing on-going support for our graduates in the form of DrupalEasy Office Hours.

Each week, we set aside two hours for any current student or graduate of any of our long-form classes to join our online classroom to ask just about any Drupal-related question they have. It might be about a project they're working on, something they learned in the course, or advice on how to tackle something that is a bit outside of their comfort zone. Regularly using screen-sharing, we can almost always help the person with their request - and most of the time, those watching pick up a thing or two as well.

The most rewarding aspect of DrupalEasy Office Hours (for us, at least) is watching students helping students. Robert A. Heinlein once said, "when one teaches, two learn," and that is something we try to encourage in all of our classes as well as in DrupalEasy Office Hours.

This type of learning community has been a hallmark of what DrupalEasy training, consulting, and project coaching is all about. By engaging a subset of the larger Drupal community, our students gain experience, knowledge, and most of all - the confidence to ask fellow community members for help in an environment that is supportive and nurturing. 

Over the past few years, we've heard of similar programs by various Drupal shops who provide a similar service for their clients. We can't think of a better way to provide on-going goodwill and mentoring.

If you're a graduate of one of our long-form classes, be sure to pop in and say hello (contact us for details).

Ever wondered how Drupal 8 authenticates a user? Let's do a deep dive and find out.

In this journey, we will encounter a few new concepts, which I'll try to explain briefly here and in detail in separate blog posts. Many of these concepts are borrowed from Symfony and adopted in Drupal 8. The journey of a request begins in a Symfony component called the HTTP kernel. The job of the HTTP kernel is to handle requests and respond to them in an event-driven way.
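For orientation, this is essentially what Drupal 8's stock index.php front controller does:

use Drupal\Core\DrupalKernel;
use Symfony\Component\HttpFoundation\Request;

$autoloader = require_once 'autoload.php';

$kernel = new DrupalKernel('prod', $autoloader);
$request = Request::createFromGlobals();
// handle() dispatches the kernel's request/response events; the
// authentication subscribers we will look at hook into the request event.
$response = $kernel->handle($request);
$response->send();
$kernel->terminate($request, $response);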

Originally posted on LinkedIn.

The Government of Canada’s Web Renewal Initiative has failed. It may not be public yet, but there really is no way to redeem this half-conceived initiative to centralize all government pages onto a single website - Canada.ca.

This goal was lifted from the UK Government's Government Digital Service (GDS). The goal of the GDS team was no less than digital transformation. Our government appears to have mistaken the alpha.gov.uk site for the end goal, rather than a platform with which to experiment with new ideas in government usability. The GDS is continuing to innovate to better serve the needs of their citizens, and having an open strategy allows them to have their ideas validated by the world.

The Web Renewal Initiative (the mega-migration to Canada.ca) was started by the Conservative government, which was obsessed with centralizing communications & outsourcing as much as possible to the private sector.

Centralizing on Canada.ca was a Bad Idea

Serving all public Government of Canada content via a single site guarantees that this project will not be able to Fail Forward and learn through constant iterations. If governments are going to learn and grow with their IT projects they need to be structured so that public servants are able to take on small risks. Building the “one site to rule them all” will ultimately leave everyone focused on limitations of the tool rather than the needs of the user.

There is not a single user for government sites. There is no way to appeal to scientists, students, seniors, travellers and business owners, just to name a few, through a single voice. You do need a single Canada.ca site to be able to effectively answer most questions from citizens, but you also need to be able to direct them to a more detailed department site if they want more information.

Many departments also have websites or web apps that they have built for specific purposes. Most government sites aren’t as active as weather.gc.ca and won’t need their content to be updated 100s of times an hour. People go there for one specific reason (to get a permit, to find out if a drug is approved, to find the address of our High Commission in DC), and Canadians depend on this service. There are countless other examples where an agency might choose to set up a new website to try to target an audience or need which their departmental site cannot satisfy.

This project went off the rails before the RFP was even awarded. The very first item in the UK Government Digital Service's Design Principles is to Start with user needs. Although there are great usability folks who have been involved, there hasn't been a mandate of "service transformation" to really put users first. The rushed mandate of Canada.ca started with a bunch of assumptions and hasn't brought on the user researchers or data analytics people to understand how to better meet user needs, let alone talk to users. The best hope for Web Renewal would be that it could save money; it was not designed to improve service.

It is worth mentioning that this initiative is built on proprietary software and managed completely by American-based international corporations. This approach does not support the broader public policy of a modern, open-by-default government that supports Canadian innovation. The process of centralizing and outsourcing government IT makes it inevitable that multinational corporations are going to win contracts. Most Small & Medium-sized Enterprises (SMEs) just don't have the resources to bid on multi-million dollar contracts, let alone win them. When leveraging open source, large projects can be broken down into smaller ones that will allow more Canadian companies to become involved.

Whether it is a giant multi-national or a small business, it is never a good idea for government to give a monopoly to a private sector company, like they did for Canada.ca. The vendor lock-in that comes with proprietary software makes it even worse as any transition away will include both migrating to a new technology stack as well as finding a new company to provide support.

I’ve previously highlighted the many problems with the implementation of Canada.ca. It is now time for everyone to admit that Web Renewal has failed. But if we do that, what should it be replaced with? What can be learned from this experiment and pulled forward into a plan to help build the innovative modern government that Trudeau has promised Canadians?

I don’t think anyone is calling for a return to how government developed websites before Web Renewal. There does need to be more structure. There were too many orphaned projects that lacked proper accessibility, security & branding. What is the alternative?

10) Make things open: it makes things better

This is the final item in the UK GDS Design Principles. Last but not least, particularly since it underpins the Open Government approach that is framing this discussion around the world. Building in the open has a great many advantages, which have been articulated very clearly by government leaders in the UK, USA, Australia, France, Spain, and indeed most of the G20.

“Open source software can support the Digital Government Strategy's "Shared Platform" approach, which enables Federal employees to work together-both within and across agencies to reduce costs, streamline development, apply uniform standards, and ensure consistency in creating and delivering information.” - U.S. Department of Health & Human Services Website

At the 2016 Open Government Partnership meeting in Paris the importance of Open Source was acknowledged by governments around the world, including the Government of Canada.

So start with an open platform. The tool doesn’t particularly matter, but the approach absolutely does. There are almost no acceptable reasons why the government should ever build software from scratch. Governments need to find existing software communities and become engaged with them.

  • Review open-source software in use by our closest allies (USA, Australia, New Zealand, the EU & its member countries)
  • Experiment with public repositories other governments have shared
  • Adopt several that meet Canada’s unique needs in specific domains
Adopt an “Open” IT Workplace 

With the rate of change in IT, just to keep up, organizations need to be constantly investing in their workforce to ensure that they have the modern skills required. Working in the open makes developers more careful with their code. If your work is going to be published, you want to make sure that it is well written, documented and not introducing embarrassing bugs. Having a good reputation is increasingly important in the internet age. Working in the open also allows governments to have their work verified by external developers (for free).

“By making our code open and reusable we increase collaboration across teams, helping make departments more joined up, and can work together to reduce duplication of effort and make commonly used code more robust.” - Anna Shipman, Open Source Lead UK GDS

To increase collaboration outside of government, it is always useful to release code under a commonly used license (such as the GPL, MIT or Apache), which aids distribution. The Open Government Licence adopted by Canada might become well understood in Canada, but not internationally. The US government defaults to Public Domain, which is very pervasive and also well understood.

Prepare for Linguistic Diversity

The ability to fully manage bilingual content is difficult for many sites. The Government of Canada also needs to be able to support the languages of First Nations, Inuit, Métis and New Canadians. Any Content Management System (CMS) chosen should be able to support, at a minimum, the orthographies of Ojibwe and Inuktitut, in addition to languages like Arabic & Chinese, which are the first languages of many Canadians. There are several open-source solutions that can already address our complex linguistic requirements.

With a commitment to open-source, one could also build in a decentralized readability evaluator to ensure that the content author knows how complex their work is (in real time) and that departments can assess a cross-site picture of their content. Writing in Plain Language isn't something that comes naturally, but it is an important part of any accessibility or usability goals. There are well-established open-source tools that already allow for multiple ways to evaluate language complexity; it is simply a matter of ensuring that they are built into the new websites that are used for creating the content.

Commit to Adopting Open Standards

When the Government of Canada formally gives up its goal to implement one site for the entire public service, we need to see a real commitment to Open Standards. Software interoperability allows the government to move the discussion away from specific tools and toward broader cross-departmental needs. The UN's International Telecommunication Union (ITU) defines them this way:

“‘Open Standards’ are standards made available to the general public and are developed (or approved) and maintained via a collaborative and consensus driven process. ‘Open Standards’ facilitate interoperability and data exchange among different products or services and are intended for widespread adoption.”

The World Wide Web Consortium (W3C) is such a body, and has ongoing committees that work to improve standards like HTML, Web Accessibility Initiative – Accessible Rich Internet Applications (WAI-ARIA), as well as Web Content Accessibility Guidelines (WCAG) 2.0 & Authoring Tool Accessibility Guidelines (ATAG) 2.0. Some of these form the basis of government initiatives like the Web Experience Toolkit, as well as the Common Look and Feel before that.

Important W3C standards for this discussion are the Semantic Web Standards, most fundamentally the Resource Description Framework (RDF). One could also look at a machine-readable markup language like the W3C's eXtensible Markup Language (XML), or even cutting-edge features like Web Components. The important thing is that there is a set of agreed-upon standards with which government websites can effectively exchange information with each other.

A Coordinated Decentralized Approach

I don’t know of a government that has fully embraced the Semantic Web, but the technology is already well established. Adopting this set of standards would allow for the realization of much deeper content sharing between networked sites. With a cohesive implementation you can divide the roles of content generation and content curation.

In Part 2 of this article I will elaborate on how this approach could be leveraged within the Government of Canada.

Part 2: Implement a Federated Architecture

The Government of Canada may require 1000+ websites to effectively engage with all of its various stakeholders: people, organizations and other government agencies. Maybe it is as few as 100, but it doesn't make any sense to select an arbitrary number here. We will only know how many sites the Government of Canada needs when we understand the users better. The GDS's first principle, Start With User Needs, is key. We know that there are going to be more than a handful and that there will inevitably be overlapping content.

Certain departments must have authority over some content, and this content should be distributed across government so that it is timely and accurate. This was one of the problems that Web Renewal was attempting to resolve by centralizing everything.

With a commitment to Open Standards it is possible to build a federated approach to content so that this can be accomplished. Any modern CMS can expose content in a machine-readable format (to everyone) so that it is open by default. It can then be consumed (either by people or machines) and easily syndicated within another site's domain.

Some Practical Examples

Health Canada should be the authority on all information related to health. We can identify places where health information should be included in:

  • Global Affairs Canada to help assist travelers
  • Immigration and Citizenship in the application process
  • GCTools for the public sector employees
  • Weather.gc.ca might be useful for seasonal warnings
  • Canada.ca the central government hub

Health Canada would be responsible for generating the content, and other government sites would simply be responsible for curating it. For the next SARS or bird flu-like scare Canadians need a central means to manage and update health information, but that can be automated through a federated architecture.

Similarly it would be useful to be able to use government sites to alert people if there are weather warnings in their area. Obviously you only want to include location specific warnings on government sites, when you have confidence about the location of the user. However, it would be possible through a federated distributed network to be able to share this information so as to protect Canadians.

Some Advantages of a Federated Approach

The current configuration of Canada.ca presents a number of security challenges that can be overcome with a federated approach. You could set up a workflow of content between internal departmental sites that are inaccessible to vendors, contractors and non-authorized personnel until it is published to external public-facing sites, where content is exposed to the public after it has cleared the appropriate approvals.

Having multiple sites in multiple environments will make the whole much more robust; Web Renewal, by contrast, has created a single point of failure (as well as a huge bottleneck for content). Working with open-source communities that have a critical mass of users will also ensure that your infrastructure is not relying on "security by obscurity".

The site that generates the content doesn’t need to be the site which displays the content. It makes sense that it would in most instances, but perhaps not all. The point of a central site though is to curate information to help see that users are able to get the information that they need as quickly as possible. The central site should not be where most content is generated.

The Government of Canada is attempting to modernize. The new Experimentation Direction for Deputy Heads has a lot of potential, but is severely restricted by Web Renewal. Being able to provide a sandboxed version of Canada.ca for people to experiment with would be a game changer for people wanting to innovate. Providing a simple framework for A/B testing is key if we are to know how best to interact with Canadians.

If Canada.ca becomes just a light framework that collates information from other government sites, there is no reason that this couldn't be distributed. A central agency could experiment with several versions, each of which could independently build up-to-date information from live departmental sites. With a proper cloud-based environment, it would be trivial to spin up a new variation, direct a percentage of the traffic to the new instance of the site, and evaluate what impact a change has on users' behavior.

The Fate of Canada.ca

Obviously we still need a central website for citizens to engage with government. Like Ontario.ca, there needs to be a good starting point for everyone looking for government services. Citizens who don't know where to go need a starting point. But frankly, it doesn't even necessarily need to be a CMS, as one could use a static site generator to produce static web pages that are secure & robust, much like GitHub Pages does.

Ideally it would be great to have personalization in this central site to help guide people to the resource that they need, but there are many ways that it could simply aggregate information from federated departmental authorities and display it as part of Canada.ca.

Obviously, search will be key to this. However, once all of the departmental information is in a machine-readable format, it will be much easier to provide one or more search options which may be better suited for different needs. Many users are already going to start at Google.ca, so simply embedding a Google Search into government sites doesn't necessarily give Canadians a better experience.

Integrating with other Levels of Government

Once you have Government of Canada departments onboard, it will also be possible to integrate with other government agencies. Citizens don't really care what level of government is responsible for their problem; they just want the problem to go away. Using an open, federated architecture, provincial and municipal departments can both include information from the Government of Canada in their sites (in real time, with no manual intervention) and share their own data (which could be aggregated as needed).

If everything developed by the Government of Canada is developed with an Open by Default approach and shared back to the public, then it will be easier for other organizations to engage with government as a platform for innovation. We will see the solutions spearheaded by government (like the Web Experience Toolkit) used and extended by other organizations. We will find it easier and more cost effective to implement secure, accessible, bilingual solutions which can be adopted by Canadian organizations for their own needs.

Long Live Canada.ca

There is a path forward. Let’s stop spending money on expensive American proprietary software solutions, and start investing in a Federated Open Departmental Web Strategy. Canada needs the public sector to be championing open-source and open standards if we are going to catch up with our allies.

A cultural change is needed to make this happen. It won't be easy, but we know that with leadership and courage huge changes have taken place in the least likely places. Dave Rogers and Steve Marshall of the UK's Ministry of Justice have said that their "public code repository is an important part of our recruitment strategy." If the government is interested in recruiting new talent, this could be an important step.

That being said, because our allies' solutions are built in the open, we can catch up quickly if we are able to find the leadership to make it happen.

This outline has been mostly focused on changing the technology, but this federated distributed network will allow communications departments to be more agile & responsive as well. I have trouble imagining any modern organization starting to write a web page by opening up a Microsoft Word document. The web has more than enough capacity as a publishing framework that this step simply gets in the way. Canadians expect their government to be less rigid and more timely and by decentralizing communications tools we can help make that a reality.

Having the right tools in place allows for better workflow management with proper content controls. The end result should be empirically knowing that government sites are always getting better at meeting needs of users.


A content management workflow is used by media enterprises to control authorship, editing and publishing access, and to assign roles for altering the states, cycles and types of content for users.

A content workflow is also known as a Content Governance Model. A content workflow can define the roles, responsibilities, documentation and flow of content. Media enterprises have a responsibility to ensure a smooth workflow, because it usually involves a lot of processes and people, ranging from the author to the editor, a publisher, and also a creative team.

A defined model of workflow involves all the stakeholders from planning…

Direct .mp3 file download.

Adam Bergstein (nerdstein) joins Mike Anello to discuss the potential need to evolve Drupal Community Governance.

Five Questions (answers only)
  1. Playing with his kids.
  2. Docker for Mac.
  3. Making Drupal 8 core an amazing experience for content authors.
  4. Holding an alligator.
  5. Working with Drupal at Penn State.
Subscribe

Subscribe to our podcast on iTunes, Google Play or Miro. Listen to our podcast on Stitcher.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

Memcache in Drupal 8: how to optimize performance
Kevin VB, 13 Jun

In this blog post, our technical lead Kevin guides you through the best caching strategies for Drupal 8.


Flow improvements with Drupal 8

The way data is cached has been overhauled and optimized in Drupal 8. Cached data is now aware of where it is used and when it can be invalidated, which resulted in two important cache bins responsible for holding rendered output: cache_render and cache_dynamic_page_cache. In previous versions of Drupal, the page cache bin was responsible for the rendered output of a whole page.

Consequently, the chance of having to rebuild a whole page in Drupal 8 is far lower than in previous versions, because the cache render bin will already contain some blocks for certain pages - for example, a copyright block in your footer.
Nevertheless, having to rebuild the whole render cache from scratch on a high-traffic website can result in a lot of insert query statements for MySQL. This forms a potential performance bottleneck.
 

Why use Memcache?

Sometimes you need to rebuild the cache. Doing this on large sites with a lot of real-time visitors can lead to a MySQL lock timeout, because the cache tables are locked by the cache rebuild function. This means that your database is unable to process the cache set queries in time, in the worst case resulting in downtime for your website.

Using Memcache allows you to offload cache bins directly into RAM, which makes cache sets cheap, speeding up the cache along the way and giving MySQL more breathing space.
 

How to install Memcache?

Before you can connect to memcache, you need to be sure that you have a memcache server up and running. You can find a lot of tutorials on how to do this for your distribution, but if you use MAMP PRO 4 you can simply spin up the memcache server. By default, memcache will be running on port 11211.

When you have the memcache server specifications, host IP and port you need to download and install the Memcache module, available here: https://www.drupal.org/project/memcache

This module is currently in alpha3 stage and is ready to be used on production sites.

Once you have installed the module, it should automatically connect to memcache using the default settings, which assume that the memcache server is running on localhost and listening on port 11211. If memcache is running on a different server or listening on another port, you need to modify the connection by changing the following line in your settings.php:

$settings['memcache']['servers'] = ['127.0.0.1:11211' => 'default'];
Configuring Memcache

Once you have installed memcache and have made the necessary changes to the settings.php file to connect to the memcache service, you need to configure Drupal so it uses the Memcache cache back end instead of the default Drupal cache back end. This can be done globally.

$settings['cache']['default'] = 'cache.backend.memcache';

However, doing so is not recommended, because it cannot be guaranteed that all contrib modules only perform simple get and set operations on cache bins. In Drupal 7, for example, the form caching bin could not be offloaded to Memcache, because the cache key could get overwritten with something else, resulting in a cache miss for specific form cache entries.

Therefore, it is recommended to always check that a cache bin is only used to store cache entries and fetch them later, without depending on an entry still being present in the cache.

Putting cache_render and cache_dynamic_page_cache into Memcache is the safest and most beneficial configuration: the larger your site, the more queries those tables endure. Setting up those specific bins to use Memcache can be done with the following lines in settings.php:

$settings['cache']['bins']['render'] = 'cache.backend.memcache';
$settings['cache']['bins']['dynamic_page_cache'] = 'cache.backend.memcache';
How does it work?

To be able to test your setup and fine-tune Memcache, you should know how Memcache works. As explained before, we are telling Drupal to use the cache.backend.memcache service as its cache back end. This service is defined by the Memcache module and, like any other cache back end, implements CacheBackendInterface. This interface is used to define a cache back end and forces classes to implement the necessary get, set, delete, invalidate, etc. functions.

When the memcache service sets a cache entry, it stores it as a permanent item in Memcache, because validity is always checked on cache get.

Invalidation of items is done by setting the timestamp in the past. The entry will stay available in RAM, but when the service tries to load it, it will be detected as an invalid entry. This allows Drupal to recreate the entry, which will then overwrite the cache entry in Memcache.

Conclusion: when you clear all caches with Memcache installed, you will not remove all keys in Memcache but simply invalidate them by giving them an expiration time in the past.
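An illustrative sketch of that read path (not the module's actual code; the function name is hypothetical):

// An "invalidated" item stays in RAM, but its expiry timestamp is in
// the past, so it is flagged invalid on read and Drupal rebuilds it.
function sketch_memcache_get(\Memcached $memcache, $cid, $request_time) {
  $item = $memcache->get($cid);
  if ($item === FALSE) {
    return FALSE; // True miss: the key is not in RAM at all.
  }
  // -1 is CacheBackendInterface::CACHE_PERMANENT.
  if ($item->expire != -1 && $item->expire < $request_time) {
    $item->valid = FALSE; // Present but invalidated; the caller rebuilds it.
  }
  return $item;
}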
 

Optimizing your Memcache setup

Simply using Memcache will not always mean that your site will be faster. Depending on the size of your website and the amount of traffic, you will need to allocate more RAM to Memcache.

How best to determine this amount? Knowing how much data is currently cached in MySQL helps: summarize the sizes of all cache tables, then check how much of that data is configured to go into Memcache.

Let me give an example: consider a 3GB cache_render table and a 1GB cache_dynamic_page_cache table, resulting in 4GB of data that would be offloaded to Memcache. A 4GB RAM setup for Memcache would then be a good starting point.

But how can you check if this setup is sufficient? There are a few simple rules to check whether you have assigned sufficient - or perhaps too much - RAM to Memcache.

  • If your evictions are increasing (meaning that memcache is overwriting keys to make space) and your hit rate is below 90% and dropping, you should allocate more memory.
  • If your evictions are at 0 but the hit rate is still low, you should review your caching logic. You are probably flushing caches too often, or your cached data is not being reused, meaning that your cache contexts are too wide.
  • If your evictions are at 0, your hit rate is 90% or higher, and the bytes written to memcache are lower than the allocated RAM, you can reduce the amount of RAM allocated to Memcache.

It is very important that you never assign more RAM than available. If your server needs to start swapping, the performance will drop significantly.
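These numbers are easy to pull with the PHP memcached extension (a minimal sketch; it assumes the PECL memcached extension is installed and a server is running on 127.0.0.1:11211):

// Inspect the stats that matter for sizing Memcache.
$m = new \Memcached();
$m->addServer('127.0.0.1', 11211);
$stats = $m->getStats()['127.0.0.1:11211'];

$hits = $stats['get_hits'];
$misses = $stats['get_misses'];
$hit_rate = 100 * $hits / max(1, $hits + $misses);

printf("Hit rate: %.1f%%\n", $hit_rate);
printf("Evictions: %d\n", $stats['evictions']);
printf("Memory used: %d of %d bytes\n", $stats['bytes'], $stats['limit_maxbytes']);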


Conclusion

If you are considering using memcache for Drupal, you need to think a few things through in advance:

  • Which cache bins will be offloaded into Memcache? Only offload cache bins that are used purely as caches, where nothing depends on an entry still being present.
  • Does the site have a lot of traffic and a lot of content? This will result in larger render cache tables.
  • How much RAM to allocate to Memcache, depending on the amount available on your server and the size of the cache bins you offload to Memcache.

Also keep in mind that the allocation of RAM for Memcache is not a fixed configuration. When your website grows, the cache size grows with it, which implies that the amount of necessary RAM will also increase.
 

We hope this blog post has been useful! Check our training page for more info about our Drupal training sessions for developers and webmasters.
