
10 principles of digital accessibility for modern marketers



When we talk about digital accessibility as marketers, we’re talking about the intentional creation of an experience that can be accessed by as many people as possible.

Designing for digital accessibility means many things. It means designing for individuals with sensory or cognitive impairments. It means designing for people with physical limitations. It means designing for individuals who rely on adaptive and assistive technologies like screen readers or magnifiers to view digital content.

The key is building accessibility into your digital experience from the very start rather than bolting it on as an afterthought. Below, I’ve outlined some key accessibility principles to consider when creating your digital marketing materials.

Principles for developers

1.  Apply standard HTML semantics

Accessible design begins with standard HTML semantics. Standard HTML enables screen readers to announce the elements on a page so the user knows how to interact with the content. When tags without semantic meaning are used, such as <div> and <span> styled purely for visual effect, the browser will display the elements as the developer intended, which, unfortunately, may not be very helpful for the user.

Keep in mind that the user’s experience with a screen reader can vary greatly. For instance, using <div class="h1">Introduction to Semantics</div> or custom code that overrides default browser styles will produce something that looks like a heading. A screen reader, however, will not recognize or announce that element as a heading.
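To make the contrast concrete, here is a minimal sketch of the same heading marked up both ways:

<!-- Styled to look like a heading, but carries no semantics for a screen reader -->
<div class="h1">Introduction to Semantics</div>

<!-- Announced by screen readers as a top-level heading -->
<h1>Introduction to Semantics</h1>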

Key takeaways

  • Use standard HTML whenever possible so that screen readers will maintain the structure and content when reading aloud.
  • Use structural elements to group content and create separate regions on a page, such as the header, navigation, main and footer. Screen readers recognize these structural elements, announce them to the user and allow additional navigation between them.

2. Enable keyboard navigation

All websites should be keyboard accessible because not all users can operate a mouse or view a screen. In fact, according to WebAIM’s survey of users with low vision, 60.4% of respondents always or often use a keyboard to navigate web pages. Individuals with permanent or temporary loss of the use of their hands, or of fine muscle control, may also use standard or modified keyboards for navigation.

For keyboard navigation to work, a user must be able to move through a page from focus item to focus item. A user typically follows the visual flow, going left to right and top to bottom: from the header to the main navigation, to the page content and lastly to the footer. When navigating by keyboard, Enter activates a focused link, the space bar activates a focused form element, Tab moves between focusable elements, and Escape closes an element.

Knowing this, it’s important to consider the actions a user might take. The rule of thumb: if you can interact with a focusable element using a mouse, make sure you can also interact with it using a keyboard. These elements might include links, buttons, form fields or a calendar date picker.

Key takeaways

  • Ensure users can reach every interactive component of the website with the keyboard. List all your site’s focusable elements and create clearly visible focus indicators.
  • Structure underlying source code to correctly order the content and navigation. Use CSS to control visual aspects of the elements.
  • Allow users to bypass long navigation menus when drop-downs contain too many links, for example with a skip link (see the sketch below).
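A minimal sketch of such a skip link, assuming a main region with the id main-content (the id and class names are just examples):

<!-- First focusable element on the page, so keyboard users can jump past the navigation -->
<a href="#main-content" class="skip-link">Skip to main content</a>

<main id="main-content">
  <!-- page content -->
</main>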

3. Use descriptive link text and attributes

Screen readers can skip from link to link within an article, so link text needs to stand on its own. Vague link text like “Click Here” or “Read More” provides very little context or meaning for someone listening through a screen reader.

Be specific and descriptive with your link text, and include meaningful phrases that describe the content the link leads to. Instead of “Contact us,” use more specific language like “Contact our sales team.” For images and videos, assign alt attributes and use descriptive file names.
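As a quick sketch of both ideas (the URL and file name here are placeholders):

<!-- Link text that is meaningful even when read out of context -->
<a href="/contact-sales">Contact our sales team</a>

<!-- A descriptive file name plus an alt attribute describing the image -->
<img src="accessibility-survey-results-chart.png" alt="Bar chart of accessibility survey results">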

Key takeaways

  • Banish extraneous and non-descriptive words in your links like “Click Here,” “Here,” and “Read More.” “10 Principles of Accessibility” reads better than “Click here to read the 10 principles of accessibility.”
  • Optimize file names and URL names and use both open and closed captioning for video content. Consider adding accurate video transcripts.

4. Use the ARIA label attribute

In some cases, the buttons or other interactive elements on your website may not carry all the information assistive technology needs. The aria-label attribute lets the site owner override an element’s default accessible name, providing additional context about the element to assistive technology.

In the following link example, a screen reader will announce “Bing Ads. Link.”

<a href="…">Bing Ads</a>

However, if the link is a call-to-action button, the site owner can use aria-label to have the screen reader speak the full call to action rather than just the button text. In this example, the screen reader will announce, “Sign Up for a Bing Ads Account. Link.”

<a href="…" aria-label="Sign Up for a Bing Ads Account">Bing Ads</a>

Key takeaway

  • Use the aria-label attribute on elements like form controls and call-to-action buttons to define the text a screen reader should read aloud when the visible text alone lacks context.

5. Properly label and format forms

Make sure forms are intuitive and logically organized, with clearly identified instructions and labels. Use labels that are always visible, and avoid relying on placeholder text inside form fields as the only prompt, since it disappears as soon as the user starts typing.

From a formatting perspective, take advantage of borders for text fields and drop-down menus, and put forms in a single-column layout. Also use HTML input types so users automatically get the right virtual keyboard for each field; a phone-number field, for example, should pull up the numeric keypad rather than the standard keyboard.
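A minimal sketch of a labeled phone field along those lines (the field names are illustrative):

<!-- A visible label tied to its field; type="tel" triggers the numeric keypad on mobile -->
<label for="phone">Phone number</label>
<input type="tel" id="phone" name="phone" autocomplete="tel">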

Key takeaway

  • Be careful when using JavaScript in forms; custom scripting can make a form difficult to complete using a keyboard.

6. Use tables for data

There are two basic uses for tables online: data tables, with row and column headers that display tabular data, and tables used for page layout. HTML tables are intended for tabular data. Layout tables typically lack logical headers or information that can be mapped to cells, so screen readers must guess the purpose of the table. For this reason, it’s important to use CSS for layout and reserve tables for data. Using CSS also results in cleaner, simpler HTML.
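For example, a minimal data table with header cells that screen readers can map to each data cell (the figures are placeholder data):

<table>
  <caption>Site visits by month</caption>
  <tr>
    <th scope="col">Month</th>
    <th scope="col">Visits</th>
  </tr>
  <tr>
    <td>January</td>
    <td>12,000</td>
  </tr>
</table>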

Key takeaway

  • Use the appropriate mark-up for data tables and always include table headers. Always choose CSS over tables for page layout.

Principles for writers and graphic designers

7. Write content in a structured way

The structure and flow of your content are especially important for individuals who have a visual impairment and rely on screen readers. It’s also important for folks with cognitive and learning disabilities, as well as anyone scanning through content on a mobile screen. When writing for accessibility, summon your inner high-school English teacher and organize content clearly with descriptive headings for each section.

Key takeaway

  • Make text easy to read and logically structured. Be sure to use semantic markup for headings, paragraphs, lists and quotes.

8. Align to the left

Text alignment impacts readability, according to UX Movement. Centered text makes the reader work harder: without a straight left edge, the eyes have no consistent place to return to when moving to the next line. Use left-aligned text for a straight edge that makes it easier to scan content and find breaks in the writing structure.

Key takeaway

  • Only use centered text for headlines and short lines of text such as quotes and callouts. Avoid mixing text alignments.

9. Choose fonts judiciously

I love beautiful, artistic fonts, but the fact is that some fonts are easier to read than others, which is why it’s important to stick with basic fonts. Sans-serif fonts are easier to read for people with visual or cognitive disabilities, including temporary impairments like reading a screen in bright sunlight.

Size also matters. Avoid font sizes smaller than 12, and choose absolute units (pixels or points) rather than relative units (%) to define font size. Limit the number of fonts to make content easier to read. Don’t rely on the appearance of fonts (color, shape or placement) to convey the meaning of the text. Finally, avoid blinking or moving text; no user wants to chase a message around a screen.

Key takeaways

  • Choose simple sans-serif fonts with plain letterforms, which make it easier for eyes to recognize letters.
  • Limit the use of font variations and sizes.

10. Put color to work

The application of color also impacts accessibility. According to WebAIM’s 2018 survey of users with low vision, 75% of respondents report multiple types of visual impairment, including 61% with light or glare sensitivity and 46% with contrast sensitivity.

Think about your color scheme and the contrast between colors to ensure that text is easily discernible from the background. The Web Content Accessibility Guidelines (WCAG) recommend a contrast ratio of at least 4.5:1 for normal text. To put this into perspective, black text on a white background has a ratio of 21:1, whereas a medium gray on white comes in around 4.5:1.

Using color alone to convey information may not be accessible to those with visual impairments. For example, websites often use green to signal something positive and red to signal something negative, which can be difficult to discern for someone with a visual impairment. Instead, consider combining shapes or icons with color.

Key takeaway

  • Ensure your colors have ample contrast and combine color with graphics or symbols to help convey meaning.

Designing for accessibility does not need to be complex or costly. It just takes planning and the intentional application of accessibility principles to ensure a more inclusive experience for everyone.


Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land.


About The Author

Christi Olson is a Search Evangelist at Microsoft in Seattle, Washington. For over a decade, Christi has been a student and practitioner of SEM. Prior to joining the Bing Ads team within Microsoft, she worked in marketing, both in-house and at agencies, for Point It, Expedia, Harry & David, and Microsoft (MSN, Bing, Windows). When she’s not geeking out about search and digital marketing, she can be found with her husband at ACUO CrossFit, running races across the Pacific NW, brewing and hunting for the perfect beer, and going on lots of walks with their two schnauzers and pug.





Restaurant app Tobiko goes old school by shunning user reviews



You can think of Tobiko as a kind of anti-Yelp. Launched in 2018 by Rich Skrenta, the restaurant app relies on data and expert reviews (rather than user reviews) to deliver a kind of curated, foodie-insider experience.

A new Rich Skrenta project. Skrenta is a search veteran with several startups behind him. He was one of the founders of DMOZ, a pioneering and widely used web directory. Most recently, Skrenta was the CEO of the human-aided search engine Blekko, whose technology was sold to IBM Watson around 2015.

At the highest level, both DMOZ and Blekko sought to combine human editors and search technology. Tobiko is similar; it uses machine learning, crawling and third-party editorial content to offer restaurant recommendations.

[Screenshots of the Tobiko app]

Betting on expert opinion. Tobiko is also seeking to build a community, and user input will likely factor into recommendations at some point. However, what’s interesting is that Skrenta has shunned user reviews in favor of “trusted expert reviews” (read: critics).

Those expert reviews are represented by a range of publisher logos on profile pages that, when clicked, take the user to reviews or articles about the particular restaurant on those sites. Where available, users can also book reservations. And the app can be personalized by engaging a menu of preferences. (Yelp recently launched broad, site-wide personalization itself.)

While Skrenta is taking something of a philosophical stand in avoiding user reviews, his approach also made the app easier to launch because expert content on third-party sites already existed. Community content takes much longer to reach critical mass. However, Tobiko also could have presented or “summarized” user reviews from third-party sites as Google does in knowledge panels, with TripAdvisor or Facebook for example.

Tobiko is free and currently appears to have no ads. The company also offers a subscription-based option that has additional features.

Why we should care. It’s too early to tell whether Tobiko will succeed, but it provocatively bucks conventional wisdom about the importance of user reviews in the restaurant vertical (although reading lots of expert reviews can be burdensome). As they have gained importance, reviews have become somewhat less reliable, with review fraud on the rise. Last month, Google disclosed an algorithm change that has resulted in a sharp decrease in rich review results showing in Search.

Putting aside gamesmanship and fraud, reviews have brought transparency to online shopping but can also make purchase decisions more time-consuming. It would be inaccurate to say there’s widespread “review fatigue,” but there’s anecdotal evidence supporting the simplicity of expert reviews in some cases. Influencer marketing can be seen as an interesting hybrid between user and expert reviews, though it’s also susceptible to manipulation.


About The Author

Greg Sterling is a Contributing Editor at Search Engine Land. He writes about the connections between digital and offline commerce. He previously held leadership roles at LSA, The Kelsey Group and TechTV. Follow him on Twitter or find him on LinkedIn.





3 Ways to Use XPaths with Large Site Audits



When used creatively, XPaths can help improve the efficiency of auditing large websites. Consider this another tool in your SEO toolbelt.

There are endless types of information you can unlock with XPaths, which can be used in any category of online business.

Some popular ways to audit large sites with XPaths include:

  • Building redirect maps at scale
  • Extracting product data on ecommerce sites
  • Auditing blog content for categories, keywords and authors

In this guide, we’ll cover exactly how to perform these audits in detail.

What Are XPaths?

Simply put, XPath is a syntax that uses path expressions to navigate XML documents and identify specified elements.

This is used to find the exact location of any element on a page using the HTML DOM structure.

We can use XPaths to help extract bits of information such as H1 page titles, product descriptions on ecommerce sites, or really anything that’s available on a page.
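For example, the expression //h1 selects every H1 element in a document, and something like the following would grab the text of any span carrying a hypothetical author class:

//span[@class="author"]/text()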

While this may sound complex to many people, in practice, it’s actually quite easy!

How to Use XPaths in Screaming Frog

In this guide, we’ll be using Screaming Frog to scrape webpages.

Screaming Frog offers custom extraction methods, such as CSS selectors and XPaths.

It’s entirely possible to use other means to scrape webpages, such as Python. However, the Screaming Frog method requires far less coding knowledge.

(Note: I’m not in any way currently affiliated with Screaming Frog, but I highly recommend their software for web scraping.)

Step 1: Identify Your Data Point

Figure out what data point you want to extract.

For example, let’s pretend Search Engine Journal didn’t have author pages and you wanted to extract the author name for each article.

What you’ll do is:

  • Right-click on the author name.
  • Select Inspect.
  • In the dev tools elements panel, you will see your element already highlighted.
  • Right-click the highlighted HTML element and go to Copy and select Copy XPath.

[Screenshot: the Copy XPath option in the browser dev tools]

At this point, your computer’s clipboard will have the desired XPath copied.
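For a hypothetical article template, the copied expression might look something like this (yours will differ depending on the page’s markup):

//*[@id="post-12345"]/header/div[2]/span/a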

Step 2: Set up Custom Extraction

In this step, you will need to open Screaming Frog and set up the website you want to crawl. In this instance, I would enter the full Search Engine Journal URL.

  • Go to Configuration > Custom > Extraction

[Screenshot: the Configuration > Custom > Extraction menu in Screaming Frog]

  • This will bring up the Custom Extraction configuration window. There are a lot of options here, but if you’re looking to simply extract text, match your configuration to the screenshot below.

[Screenshot: the Custom Extraction configuration window]

Step 3: Run Crawl & Export

At this point, you should be all set to run your crawl. You’ll notice that your custom extraction is the second to last column on the right.

When analyzing crawls in bulk, it makes sense to export your crawl into an Excel format. This will allow you to apply a variety of filters, pivot tables, charts, and anything your heart desires.

3 Creative Ways XPaths Help Scale Your Audits

Now that we know how to run an XPath crawl, the possibilities are endless!

We have access to all of the answers; now we just need to find the right questions.

  • What are some aspects of your audit that could be automated?
  • Are there common elements in your content silos that can be extracted for auditing?
  • What are the most important elements on your pages?

The exact problems you’re trying to solve may vary by industry or site type. Below are some unique situations where XPaths can make your SEO life easier.

1. Using XPaths with Redirect Maps

Recently, I had to redesign a site that required a new URL structure. The former pages all had parameters as the URL slug instead of the page name.

This made creating a redirect map for hundreds of pages a complete nightmare!

So I thought to myself, “How can I easily identify each page at scale?”

After analyzing the various page templates, I concluded that the title of each page looked like an H1 but was really just large paragraph text. This meant that I couldn’t simply pull the standard H1 data from Screaming Frog.

However, XPaths would allow me to copy the exact location for each page title and extract it in my web scraping report.

In this case, I was able to extract the page titles for all of the old URLs and match them with the new URLs using the VLOOKUP function in Excel. This automated most of the redirect map work for me.
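As a sketch of that matching step: assume a sheet named New holds the extracted page titles in column A and the new URLs in column B, and the title extracted for an old URL sits in cell A2 of the crawl export (this layout is hypothetical). The matching new URL is then:

=VLOOKUP(A2,New!A:B,2,FALSE)

The FALSE argument forces an exact match, so any title that fails to match returns #N/A and can be spot-checked by hand.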

With any automated work, you may have to perform some spot checking for accuracy.

2. Auditing Ecommerce Sites with XPaths

Auditing ecommerce sites can be one of the more challenging types of SEO auditing. There are many more factors to consider, such as JavaScript rendering and other dynamic elements.

Stakeholders will sometimes need product-level audits on an ad hoc basis. These may cover just certain categories of products, or they may span the entire site.

Using the XPath extraction method we learned earlier in this article, we can extract all types of data including:

  • Product name
  • Product description
  • Price
  • Review data
  • Image URLs
  • Product category
  • And much more

This can help identify products that may be lacking valuable information within your ecommerce site.
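For instance, on a hypothetical product template, the extraction expressions might look like the following (the class names are made up; inspect your own markup in dev tools to find the real ones):

//h1[@class="product-title"]/text()
//span[@class="product-price"]/text()
//div[@class="product-description"]//text()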

The cool thing about Screaming Frog is that you can extract multiple data points to stretch your audits even further.

3. Auditing Blogs with XPaths

This is a more common method for using XPaths. Screaming Frog allows you to set parameters to crawl specific subfolders of sites, such as blogs.

However, using XPaths, we can go beyond simple metadata and grab valuable insights that help identify content gap opportunities.

Categories & Tags

One of the most common ways SEO professionals use XPaths for blog auditing is scraping categories and tags.

This is important because it helps us group related blogs together, which can help us identify content cannibalization and gaps.

This is typically the first step in any blog audit.

Keywords

This step is a bit more Excel-focused and advanced. It works like this: you set up an XPath extraction to pull the body copy out of each blog post.

Fair warning, this may drastically increase your crawl time.

When you export this crawl into Excel, all of the body text for a page lands in a single cell. I highly recommend disabling text wrapping, or your spreadsheet will look terrifying.

Next, in the column to the right of your extracted body copy, enter the following formula:

=ISNUMBER(SEARCH("keyword",A1))

In this formula, A1 is the cell containing the body copy.

To scale your efforts, you can have your “keyword” reference the cell that contains your category or tag, as shown below. You may also consider adding multiple columns of keywords to get a more accurate and robust picture of your blogging performance.
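For example, if each row’s body copy sits in column A and the category or tag extracted for that post sits in column B, the scaled version of the formula (a sketch) becomes:

=ISNUMBER(SEARCH(B2,A2))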

This formula will present a TRUE/FALSE Boolean value. You can use this to quickly identify keyword opportunities and cannibalization in your blogs.

Author

We’ve already covered this example, but it’s worth noting that this is still an important element to pull from your articles.

When you blend your blog export data with performance data from Google Analytics and Search Console, you can start to determine which authors generate the best performance.

To do this, sort your blogs by author and start tracking average data sets including:

  • Impressions – Search Console
  • Clicks – Search Console
  • Sessions – Analytics
  • Bounce Rate – Analytics
  • Conversions – Analytics
  • Assisted Conversions – Analytics

Share Your Creative XPath Tips

Do you have some creative auditing methods that involve XPaths? Share this article on Twitter or tag me @seocounseling and let me know what I missed!



Image Credits

All screenshots taken by author, October 2019





When parsing ‘Googlespeak’ is a distraction



Over almost 16 years of covering search, and specifically what Googlers have said on SEO and ranking topics, I have seen my share of contradictory statements. Google’s ranking algorithms are complex, and the way one Googler explains something might sound contradictory to how another Googler talks about it. In reality, they are typically talking about different things or different nuances.

Some of it is semantics; some of it is one person explaining something literally while another speaks figuratively. Some of it is being technically correct versus simplifying for general practitioners or even non-search marketers. And some of it is that the algorithm changes over the years, so what was true then has since evolved.

Does it matter if something is or is not a ranking factor? It can be easy to get wrapped up in details that end up being distractions. Ultimately, SEOs, webmasters, site owners, publishers and anyone who produces web pages need to care more about providing the best possible website and web page for the topic. You do not want to chase algorithms, racing after what is or is not a ranking factor. Google’s stated aim is to rank the most relevant results to keep users happy and coming back to the search engine. How Google does that changes over time: it releases core updates, smaller algorithm updates, index updates and more all the time.

For SEOs, the goal is to make sure your pages offer the most authoritative and relevant content for the given query and can be accessed by search crawlers.

When it is and is not a ranking factor. An example of Googlers seeming to contradict themselves popped up this week.

Gary Illyes from Google said at Pubcon Thursday that content accuracy is a ranking factor. That raised eyebrows because, in the past, Google has seemed to say that content accuracy is not a ranking factor. Last month, Google’s Danny Sullivan said, “Machines can’t tell the ‘accuracy’ of content. Our systems rely instead on signals we find align with relevancy of topic and authority.” One could interpret that to mean that if Google cannot tell the accuracy of content, it would be unable to use accuracy as a ranking factor.

Upon closer inspection of the context of Illyes’ comments this week, it’s clear he was getting at the second part of Sullivan’s statement about using signals to understand “relevancy of topic and authority.” SEO Marie Haynes captured more of the context of Illyes’ comment.

Illyes was talking about YMYL (your money, your life) content. He added that Google goes through “great lengths to surface reputable and trustworthy sources.”

He didn’t outright say that Google’s systems can tell whether a piece of content is factually accurate. He implied that Google uses multiple signals, such as those that gauge reputation and trustworthiness, as a way to infer accuracy.

So is content accuracy a ranking factor? Yes and no; it depends on whether you are being technical, literal, figurative or explanatory. When I covered the different messaging around content accuracy on my personal site, Sullivan pointed out the difference. He said on Twitter, “We don’t know if content is accurate” but “we do look for signals we believe align with that.”

It’s the same with whether there is an E-A-T score. Illyes said there is no E-A-T score. That is correct, technically. But Google has numerous algorithms and ranking signals it uses to figure out E-A-T as an overall theme. Sullivan said on Twitter, “Is E-A-T a ranking factor? Not if you mean there’s some technical thing like with speed that we can measure directly. We do use a variety of signals as a proxy to tell if content seems to match E-A-T as humans would assess it. In that regard, yeah, it’s a ranking factor.”

You can see the dual point Sullivan is making here.

The minutiae. When you have people like me, who for almost 16 years have analyzed and scrutinized every word, tweet, blog post or video that Google produces, it can be hard for a Google representative to always convey the exact message at every point. Sometimes it is important to step back, look at the bigger picture and ask yourself: why is this Googler saying this, or not saying that?

Why we should care. It is important to look at long-term goals and, as I said above, not to chase the algorithm or specific ranking factors but to focus on the ultimate goals of your business (money). Produce content and web pages that Google would be proud to rank at the top of the results for a given query and that other sites will want to cite and link to. And above all, do whatever you can to make the best possible site for users, beyond what your competitors produce.


About The Author

Barry Schwartz is Search Engine Land’s News Editor and owns RustyBrick, a NY based web consulting firm. He also runs Search Engine Roundtable, a popular search blog on SEM topics.



