Conference banner: Bay Area satellite photo courtesy of the U.S. Geological Survey.

Presentation Abstracts

Advanced Feature-Based Profiling

Gershon Joseph Cisco Systems, Inc.

This presentation introduces and discusses in detail the techniques being used at Cisco to manage complex releases and concurrent development via feature profiling in DITA 1.1. We have extended DITA 1.1's profiling functionality, using standard DITAVAL processing to achieve this. We are now putting feature profiling into practice and will report on how well it's working. Cisco is currently introducing this methodology into most of its CMS projects and believes that the technique scales to accommodate, and fully support, a wide assortment of business needs. Its process may be something other companies would like to consider adopting.
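
The abstract stops short of markup detail. As a rough sketch of the standard DITAVAL mechanism the Cisco approach builds on (the attribute values here are hypothetical, not Cisco's actual profiles), a conditional-processing file looks like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Hypothetical DITAVAL file: include content flagged for feature-a, exclude feature-b -->
    <val>
      <prop att="product" val="feature-a" action="include"/>
      <prop att="product" val="feature-b" action="exclude"/>
      <!-- Default action for any other value of the product attribute -->
      <prop att="product" action="include"/>
    </val>

The DITA Open Toolkit applies such a file at build time (for example, through the dita.input.valfile Ant parameter), so a single source can yield several feature-specific deliverables.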

Adventures in CMS Deployment

Seraphim Larsen Intel Corporation
Michael Pearson Intel Corporation

Our technical communications group has been working to deploy a component content management system for our business group at Intel. The goal is to manage DITA content for product technical information: documentation sets that often exceed 10,000 pages, with extensive, complex reuse and multiple topic and domain specializations.

A further goal has been to make authoring and managing the content easier for the engineers and technical writers who create, edit, review, and deliver it, while also enabling more effective delivery of content to end users.

Along the way, we have learned all kinds of things...

  • What new skills are required for successful DITA and CCMS deployment?
  • What support do you need from other groups within the company?
  • What are the top ten questions you need to ask your CCMS provider?

A Global Perspective on Authoring and Terminology

Howard Schwartz SDL

Recent industry surveys from SDL, carried out across a broad range of global businesses, have revealed interesting trends in

  • The increasing adoption of XML and DITA
  • Authoring for a global audience
  • The authoring tools of choice
  • Where authoring teams reside
  • The need for global terminology management

In the past year SDL has conducted various industry surveys in conjunction with key industry bodies, such as the Society for Technical Communication (STC), on the topics of global authoring, terminology management, and automated translation. The surveys were conducted worldwide and, therefore, represent a global perspective on the trends surrounding these topics as well as how global businesses are tackling their authoring, terminology, and translation challenges.

This session looks at the results from these surveys as well as how global organizations can approach the challenges of authoring for a global audience and managing terminology across the organization.

We hope that you will come away from this session with some key industry data and food for thought as to how you can improve your authoring and terminology management processes.

Annotations for Collaboration and Quality

Robert Minard Eurofield Information Solutions

Systems for annotating documents have come a long way since the introduction of proofreaders' marks. Recently developed annotation systems include Google My Maps for the graphical annotation of maps. Less exotic systems for annotating text published as HTML, PDF, and eComPress can contribute to improved publication processes. Annotations can be used in a wide variety of ways to support collaboration among editors, authors, reviewers, and readers. Robert discusses how annotation systems can lead to improved publication quality and currency.

Authoring in an Agile Environment

Julio Vazquez SDI Global Solutions

Julio Vazquez gives an overview of the Agile product development environment and how it differs from a traditional waterfall model. While Agile espouses a focus on development rather than documentation, the need for good product documentation still exists. The challenge is how to develop the documentation the customers need to achieve their goals within the confines of sprints and iterations. Integrating the information and product development cycles is key to success. DITA is uniquely suited for achieving Agile goals while producing high-quality information deliverables. Julio describes some of the considerations you must address when moving to an Agile development process with your information development team.

Collaborative Writing: Changing the culture of a documentation group

Tom Bondur Actuate Engineering
Nola Hague Actuate Engineering

This presentation is about starting a process, not finishing it. Actuate Engineering has shifted to an agile development release cycle. The documentation teams must now synchronize content development operations with Engineering to publish at key milestones as well as at the final release. Milestone documentation is immediately available to customers for download on the BIRT Exchange website. Writers must keep their documents in a ready-to-publish state at all times as they work.

In order to keep pace and publish in rapid iterations, the documentation teams plan to migrate to an automated, XML/DITA document framework. Before making the conversion, Actuate is performing a comprehensive review and scrub of its documentation to make sure all material complies with the DITA standard. Team leads have received training in XML/DITA conversions and are starting to work on proof-of-concept pilots.

Rewriting and reshaping this large mass of material is a formidable task. Actuate products are continuously evolving with new features appearing, other features becoming obsolete, and development strategies shifting as new technologies emerge. Documentation becomes stale and out-of-date rapidly. Continuous re-use causes content to become overworked and convoluted. The documents require constant restructuring and rewriting to keep the content relevant, focused, and flowing.

At the same time, as the company expands and merges with other companies, writers and managers working in multiple locales must find ways to communicate, coordinate work, and integrate documentation sets with different standards and production processes. A writer working in isolation may not have the skill and experience required to deal with all these problems. Actuate instituted a collaborative scrum system to provide ongoing review and assistance to writers.

Getting writers out of their cubes and into a conference room to rework content in a collaborative setting is an effective way to manage the problem. However, many writers feel insecure about having their assigned books exposed, dissected, and rewritten by a team in an open setting. Establishing a supportive environment where documents are reviewed and rewritten on the spot to show writers how to transform content requires all the professional and interpersonal skills a manager can muster.

Content Workflow Design: How much is too much?

Amber Swope Earley & Associates

One of the major benefits of using a content management system (CMS) for DITA content is the ability to design the workflow support to meet the needs of your team. However, this means that you must determine what your team really needs. The goal is to find the balance between supporting the content stakeholders and controlling content changes to reduce risk and costs. To help you find this balance, you must consider the following questions:

  • Where does your content really start and who owns it?
  • How many stakeholder roles do you have and what support does each stakeholder role need?
  • How much control do you need over content changes?
  • Who owns the responsibility for changing each workflow state?
  • What metrics do you need to capture about the content states in the content development process?
  • At what point is it appropriate to send content to localization?

Come discuss these questions and learn how the answers can help you design the proper workflow support for your team.

Creating and Leveraging Customer Feedback Relationships

Shawn Benham IBM Corporation
Janet Ikemiya IBM Corporation

You know that information requirements and updates should be driven by customers. But how do you get there if you have little customer interaction? And, even if you do enjoy a close relationship with customers, how do you ensure a focus on information/content versus product features/functions? If you want to learn a bit more about establishing such relationships, come hear how the IBM Information Management ID team created nine of them. We will tell you how we set ourselves up to make the push for these relationships (credibility and value), the overall approach we took to define and formalize these relationships (methods/process), the value we have gotten out of these relationships (both tactical feedback and strategic requirements gathering), and where we are going with these relationships in the future (WW coverage, translation validation).

Customized Documentation through Metadata and Processing Attributes

Deirdre Longo IBM Corporation
Jenifer Schlotfeldt IBM Corporation

This presentation is an experience report, showing how one team used DITA to single-source and produce an almost limitless set of customized installation documentation that matches common customer configurations. Making use of standard DITA metadata and processing attributes, the team published 18 different subset versions (each between 80 and 250 pages long) of 800 pages of installation documentation. The team can create additional combinations on demand.
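
The team's actual markup is not shown in the abstract; a minimal sketch of how standard DITA processing attributes can condition content for this kind of subsetting (element content and attribute values are invented for illustration) might be:

    <!-- Hypothetical installation steps profiled by platform and audience -->
    <steps>
      <step platform="windows"><cmd>Run setup.exe from the installation media.</cmd></step>
      <step platform="linux"><cmd>Run ./install.sh from the installation media.</cmd></step>
      <step audience="administrator"><cmd>Grant the service account access to the database.</cmd></step>
    </steps>

Selecting one platform and one audience in a DITAVAL file at publish time then produces one customized subset from the common source.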

The presentation will describe the scheme, how it was developed, and how the team built and tested the custom documentation.

Delivering User Documentation in New and Dynamic Ways

Mark Poston Mekon Ltd.

Whilst content management and authoring are essential to the creation of any technical publication, the benefit an end user can realise is the ultimate aim. This presentation focuses on how new XML-related developments, such as XML native databases and XQuery, can be used to create more engaging deliverables for end users.

Using relevant examples, Mark shows not only how these new developments provide a more efficient means of publishing content but also how content can be delivered in ways that would otherwise be difficult to achieve, for example, how user documentation, wikis, and blogs can relate to each other more effectively.

Digital Alchemy: Turning unstructured content to gold (or at least something useful)

Don Bridges Data Conversion Laboratory, Inc.

Many DITA systems don’t leverage legacy content because it’s considered too painful and/or too expensive to bring older unstructured content to the structured requirements of a DITA system. Nonetheless, the legacy content is technically sound and has application for years to come. This presentation will show what can and what can’t be reasonably done at a minimal expense to impart structure where there wasn’t any before. The tips and techniques discussed can be implemented as part of in-house or outsourced migration efforts as business drivers dictate. Beyond migration, the presentation will also show how reuse can be evaluated on a micro and macro level to build a solid ROI story for funding an XML system.

DITA CMS: Ready? Set? Go!

Anna Hartman Sybase, Inc.

Am I ready to migrate to a CMS?
You've moved into structured authoring using DITA. Information Architects adore the modularity, minimalism gives your editors a thrill, and authors thrive on finding ways to reuse their minimalistic content. Trouble is, there's so much content, you're outgrowing the file system. That's where a CMS can help.

Am I set to choose a vendor?
Where to start? Something in-house or a hosted model? A system that's been around for years or something new? An XML-based repository? A specialized DITA CMS?

Ready to go?
Sybase has walked this path (or is it plank?) and is now emerging on the other side with only minor scrapes. Come to this presentation for the nitty-gritty details regarding the CMS we chose, initial implementation steps and missteps, and how we are navigating some of the pesky challenges of continued implementation while successfully releasing content in myriad formats for solution-based products under tight deadlines.

DITA for Dummies

Jang Graat JANG Communications

DITA for the rest of us. Without diving headfirst into XML code samples, this introduction will give the audience a crisp and clear understanding of the core concepts in DITA. Using examples from everyday life, this talk will give even very non-technical listeners a good basis to start using DITA (and to start reading the DITA introductions available on the internet or in print). Core concepts covered: minimalism, topic-based writing, conref, specialization, domains, constraints, and transformation. All of this will be explained with examples that do not require ANY knowledge of software technology or prior experience with XML.

DITA for the Web

John Hunt IBM Corporation

As content moves more and more into the world of social media, mashups, and Web 2.0, what challenges does that bring to the management and delivery of content in these environments? How do DITA and a structured approach to content help address those challenges? What new opportunities for innovation become possible?

Attend this session to learn more about these topics and recent activities of the newly formed OASIS “DITA for the Web” subcommittee. See first-hand demonstrations of several IBM pilots in these areas, including dynamic delivery of custom content, DITA microformats, RESTful API services to manage the web delivery of DITA content, and strategies for publishing IBM product documentation in wikis.

Among the many related topics, hear about

  • using DITA content structures, semantics, and architectures in web sites, wikis, blogs, feeds, and other web delivery contexts, either as native DITA or as XHTML
  • using DITA best practices that can drive search, metadata, linking, and navigation of web content
  • using DITA semantics to support an XHTML microformat representation of DITA content on the web
  • using application programming interfaces (APIs) to make available and enable the assembly and dynamic delivery of DITA content with web services

DITA Meets BPMN: A visual approach to topic definition

France Baril Architextus

Business analysts (BAs) define processes graphically, using standards like the Business Process Modeling Notation (BPMN), to improve business processes and lay the foundation for efficient computer systems. Later, with or without access to the original model, writers create text procedures to support operations, provide training to staff, or document the use of computer systems.

This presentation explores how defining processes with BPMN can help writing teams scope content, split topics efficiently for different audiences, define standard writing practices, and understand how their work fits in relation to the work of their colleagues. France looks at how BPMN can help teams identify reuse opportunities by keeping an eye on the big picture, how it can help identify and maintain relationships between topics, and how it increases one's ability to identify the domino effect of changes in systems and procedures.

As with any approach, there are some limits. These include the complexity of reusing the processes drawn by the BAs and using a standard that does not map directly to DITA elements. This presentation provides tips and tricks to live with or get around such limitations.

DITA Productivity Killers

Frank Miller Comtech Services, Inc.

During a DITA implementation, and sometimes even after, organizations find themselves asking, "Why did we bother to move to DITA?" That is because DITA contains several potential traps and pitfalls which, if not addressed, can cripple productivity and make the old desktop publishing environment feel like a more productive, less costly place to be.

In this presentation, Frank highlights some commonly problematic areas of a DITA production environment, including conditions run amok, gaps in training, collaboration failures, overreliance on tools, and process bottlenecks. Real-world examples demonstrate the significance of each potential productivity drain. Frank presents solutions for tackling the problems upfront, rather than too late.

DITA vs Non-Technical Authors: Why and how?

Laurens van den Oever Xopus BV

DITA comes with many benefits, but creating valid DITA content and getting the right text in the right tags can be hard. Visually oriented authors are less likely to form a mental model of the underlying document structures. This problem is relevant today as DITA specializations like Learning and Training move DITA outside the scope of trained technical writers.

This session will show approaches for having non-technical authors write DITA content directly, including improving discovery of available tags, preventing abuse of tags for styling, and promoting inline tagging.

Enabling business users to create DITA content opens a wide range of new use cases. We will discuss how DITA's typical benefits, such as content reuse and referencing, can be leveraged for compliance, policy, and marketing materials.

Document Repositories and Metadata

Richard Beatch Earley & Associates

How do you develop a document repository for millions of documents that have no consistent structure or syntax? What if the documents are all image files such as PDFs or TIFFs? It is one thing to dump them into a repository, but you will likely need to find them in the future. The answer, as we know, is found in metadata, but deploying a comprehensive and scalable metadata solution for an enterprise, one that can permit real findability of otherwise unsearchable content, is not an easy task. Beatch explores how to approach this challenge from the perspective of the latest trends in faceted search and addresses the development and governance of the metadata management processes required to get a usable system up, running, and delivering business value.

Dynamic Publishing: The ultimate end-game of DITA deployments

Howard Schwartz SDL Structured Content Technologies
Chip Gettinger SDL Structured Content Technologies

There are a variety of reasons organizations move to DITA: to drive down translation costs, increase reuse, and provide robust support for repurposing content in different output types. While those drivers are compelling and often justify DITA deployments, the real end-game of DITA is only now being envisioned and realized. That end-game is the ability to deliver content dynamically.

This session both defines and explores the power of dynamic publishing. We begin by considering the various complementary understandings of dynamic publishing and the business benefits to be gained by moving to a structured authoring process and a dynamic publishing methodology. We'll explore how and why DITA is an enabler of dynamic publishing, the various ways to achieve dynamic publishing, and some technology requirements to consider. Finally, this session considers how to present the vision of dynamic publishing to management and the ROI justification behind these trends.

Those who attend this session will learn

  • Why dynamic publishing adoption is accelerating
  • How DITA and dynamic publishing are natural complements
  • The business benefits of dynamic publishing
  • The steps to get ready for dynamic publishing
  • Types of technologies used to achieve the vision
  • How to design best practices for establishing and maintaining content for dynamic publishing in multiple languages
  • How to create personas that accurately reflect your user base
  • Ways to measure how content is consumed and how to identify content that is not used
  • Development of dynamic publishing use cases

You’ll walk away with a set of best practices that you can incorporate into your daily practice.

GUI for the DITA OT

Mathew Varghese Citrix Systems, Inc.

This presentation will cover the design and functioning of the DITA OT GUI—an open source user interface to the DITA OT. The main goal of the session is to demystify the code so even "non-coders," especially information developers, can modify the GUI to suit their needs. During the session, attendees will learn to add a few basic controls to the GUI and compile the code. We will also gather feedback about the GUI and attempt to incorporate some of it.

All in all, this session will be hands-on. So if you've always dreamt of doing some of that cool “coding stuff”, here’s your opportunity!

How Effective Use of Metadata and the Resource Description Framework (RDF) Can Be an Answer to Your DITA Nightmares

Frank Shipley Componize Software

Many challenges face authors and organizations moving to DITA. Some of those challenges may not jump out at you at first, but soon you will be faced with problems that may keep you from sleeping peacefully at night.

For example, breaking your content into small modular topics is great for reuse, but it can easily multiply the number of files you have to manage by ten or even a hundred. Finding information in this "sea of content" can be a grueling task! Another example is links. In your DITA content, you will have links everywhere. Maps contain links to topics. Topics may have links to images, cross references to other topics, related links, and conrefs. Soon, you may not know what content is being used where; you may not even know what content is being used at all! These are just two examples of the many practical challenges you will face as an author and as an organization.

During the presentation, you will see how the effective use of metadata along with existing standards such as the Resource Description Framework (RDF) can be an answer to these challenges. With metadata and RDF, those nasty nightmares will go away. Indeed, they may well be the answer to all of your DITA dreams!
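
The abstract does not spell out how RDF is applied; one plausible sketch, assuming link relationships are harvested from maps and topics into RDF statements (the namespace and property name below are hypothetical), is:

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:ex="http://example.com/dita-links#">
      <!-- Hypothetical statement: install-guide.ditamap references configure-server.dita -->
      <rdf:Description rdf:about="maps/install-guide.ditamap">
        <ex:references rdf:resource="topics/configure-server.dita"/>
      </rdf:Description>
    </rdf:RDF>

Querying a store of such statements answers "where is this topic used?" without opening every map in the repository.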

If Only We Had Known: The snares and pitfalls of managing in a DITA environment

Catherine Lyman NetApp, Inc.
Martha Morgan NetApp, Inc.

NetApp has been playing in the DITA world for four years. What would we do differently if we had the chance to do it all over again? Where did we blow it? We will discuss infrastructure, department policy, third-party partnerships, organization structure, tools, executive communication, publishing, and anywhere else we blew it and fixed it.

Implementing DITA in Phases: Out of the frying pan and into the fire?

Briana Wherry Alfresco Software, Inc.
Janys Kobernick Alfresco Software, Inc.

Alfresco Software, Inc. is the open source alternative for Enterprise Content Management. Alfresco produces documentation for a number of different audiences, including the open source community, enterprise customers, and partners who integrate Alfresco into their products. Alfresco is a mid-sized software company that has adopted professional documentation standards. Two years ago, the Documentation department transitioned to DITA and experienced all the challenges and growing pains associated with this paradigm shift.

Now, in Phase 3 of the DITA implementation, the Documentation team confronts an entirely new set of challenges—ones that stretch recently acquired skills and knowledge even further. Expectations in this phase have increased with expansion into information sharing with other internal teams and external partners. This includes incorporating additional DITA customizations, improving the systems, and developing workflow processes to produce, maintain, and expand the documentation to achieve the maximum effect in the expanded environment.

This presentation includes:

Phase 1

  • How the Documentation team transitioned to DITA
  • How the overall documentation process and responsibilities of team members changed

Phase 2

  • Rewriting content to achieve reuse
  • Rebuilding information architecture to support reusability
  • Defining formal standards, processes, and guidelines

Phase 3

  • Where do we go from here?
  • Expectations versus reality

Implementing Writing Patterns Using DITA

Dana Spradley Oracle
Ultan Ó Broin Oracle

DITA provides an ideal platform for implementing a pattern language for help topics—what we’ve been calling "writing patterns" at Oracle.

The idea of using a pattern language to capture and communicate best design practices began in architecture, quickly found a home in user interface design, and has even spread to object-oriented programming. It has not made much headway in the field of technical writing, however.

DITA has helped us rectify that situation at Oracle. It has allowed us to specialize 16 different topic types from a single ancestor, each of which embodies a particular writing pattern. They include various kinds of FAQs (Why did...?, What happens if...?, How can I...?), conceptual topics (Explaining Architecture, Application Processing, Decision Support), and examples (Simple Examples, Worked Example, Conceptual Example).

Thanks to specialization, each inherits its processing and formatting code from that single ancestor, allowing easy extension to the writing pattern set whenever the need arises.

Improving Content Collaboration Using Content Management

Suzanne Mescan Vasont Systems
Charlotte Robidoux Hewlett-Packard Company

As organizations get larger and work forces become more dispersed or home-based, it becomes harder to get the right people to collaborate on new or revised content. It’s important to get input about the content from the right people at the right time to ensure accurate, usable documents for customers. A content management system can facilitate collaboration between writers, subject matter experts, and editors. However, writers are not always comfortable giving up ownership of whole documents and sharing content with others in a content management environment.

In this presentation, you will learn

  • The six principles of collaboration
  • Why content collaboration is important
  • Ways of building trust and strengthening teams
  • How a content management system can expedite and simplify collaboration between writers, editors, and reviewers

Introduction to CSS3 for Print Design and for DITA

Michael Miller Antenna House

XSL-FO has traditionally been used to format XML for print and PDF, while CSS has been used for web design. Companies that need to produce both print and web output from their XML have therefore needed one stylesheet language for web design and another language or tool for print design. CSS3 offers the functionality necessary to enable, for the first time, the use of a single stylesheet language, and a single stylesheet, for both web and print output. Miller looks at the functions specified in CSS 2.1 and CSS3 that make document production using CSS a reality. This presentation includes these topics:

  • Introduction
  • Considering what makes a document
  • Using CSS to design a document for print and PDF
  • Looking at current products that support CSS for print design
  • Questions and answers

Introduction to DITA in Japan

Eiji Hayashiguchi IBM Corporation

2009 was the first year of DITA in Japan. In the spring, the DITA Consortium Japan (DCJ) was founded by IBM Japan. Several major DITA-related vendors, working in five DCJ working groups, have started research, investigation, and development activities to support DITA. Many companies interested in DITA attended seminars held by the DCJ, and their interest in DITA has grown rapidly. In addition, many digital-document research societies and companies collaborated to produce a symposium about DITA. Several leading companies have seriously considered the use of DITA, and some of them have actually adopted it, although others have not. As a result of these serious discussions, however, the barriers to DITA adoption have been identified. In this presentation, we discuss the meaning of the year 2009 for DITA in Japan and consider the development of DITA in 2010 and beyond.

Keynote Panel—State of the Industry

JoAnn Hackos Comtech Services, Inc.
Gershon Joseph Cisco Systems, Inc.
John Hunt IBM Corporation
Jonathan Price The Communication Circle, LLC

To open the 2010 conference, join our panel of visionaries in discussing the future of how we develop, manage, and communicate information to customers, staff, and management in our organizations. Hear from four old hands how the world of information development has changed over the past 30 years. Learn how we have changed the way we work, not only by introducing new technology into our enterprises, but rethinking traditional goals and practices. Be challenged to consider a vision of the future that places knowledge, experience, and learning at the heart of a corporate success story.

Localization: Product Information Without Borders

Cindy Elliott PTC

Localized, high-quality, accurate service and support information is crucial to succeeding in global markets. Elliott details two customer case studies, Komatsu and Ingersoll Rand Club Car, highlighting their approach to technology selection and the adoption roadmap they put in place to optimize their translation and localization processes. Learn how they automated processes to centrally manage, profile, and aggregate content for dynamic information delivery regardless of output media, including print, PDF, and the Web.

Managing Careers and Motivating Your Team in the Wild World of DITA

Cindy Frakes Oracle

The transition to DITA and structured authoring requires a lot of coordination within an Information Development group as well as with other groups that interact with them. Changes in tools, infrastructure, and authoring approach are also required. If you are in management or in a lead role in the organization, you will find that many presentations are required to sell the new authoring model and tools to management, internal customers, and external customers.

With all the focus on justifying costs and new processes, what frequently gets lost in the shuffle is how this migration affects the Information Developer and the Information Development Engineering team. Not only are they required to change their authoring approach, they also need to build a different set of skills to support the new model. Until now, Information Development organizations have been very knowledgeable about how to advance careers in a traditional technical writing environment, but the world of structured authoring and DITA creates a new world of challenges.

To address these challenges, Borland created a clearly defined career ladder that not only provided Information Development's individual contributors and managers with detailed information on what skills were required to succeed in the new authoring environment, but also provided the individual contributor and the management team with the tools they needed to define and accomplish career growth opportunities. This session will address:

  • The factors that shaped the development of the Information Developer, Information Development Engineering, and Information Development Management career ladders
  • How to define achievable, measurable competencies that let both the managers and individual contributors map out their specific paths
  • How to use this tool to motivate the staff to achieve their career goals and improve your team’s visibility

Moving to Topic Documentation in Two Parts: Information Architecture & Authoring Environment—A Case Study

Andrea Kuroda Marra Oracle (formerly Sun Microsystems)
Jenny Redfern Oracle (formerly Sun Microsystems)
Linda Wiesner Oracle (formerly Sun Microsystems)

The Oracle (formerly Sun Microsystems) SPARC Technical Publications Information Architecture Team presents Part 1 of the case study, highlighting factors that shaped their strategy to migrate from writing linear content to writing topics for the Web. The presentation includes the business requirements; content architecture; supporting tools, training, and delivery considerations; and best practices that helped create the new user-focused, task-based information model.

The team explains the importance of creating user-centered goals as an organizing principle for delivering information in topics. Critical planning requirements, including content plans that include audience and task analysis, information typing, and linking are also covered.

The team discusses the challenge of simultaneously adopting a structured authoring tool, using the XML features of FrameMaker 7.1, which enable the information architecture. It focuses on the strategy of leveraging existing tool sets and DTDs within the company to scale resources and share content and expertise across organizations.

The Oracle (formerly Sun Microsystems) SPARC Technical Publications Authoring Environment Team presents Part 2 of the case study, highlighting information architecture requirements that guided the move to a component content management system.

Topic-based information architectures require customizable tools for structured authoring and a flexible CMS to handle new processes. We will present the process we used to design and implement the authoring tools and content management system in our new authoring environment. We will explain key goals and milestones, demonstrate results, and outline our next steps.

Planning for the Content Management Technology Diffusion Process

Rebekka Andersen University of California, Davis

A great number of information development groups attempting to adopt a component content management (CCM) system have an overly simplistic view of the technology diffusion process. They tend to view diffusion in terms of information transfer as opposed to continuous, dynamic exchanges of knowledge, skills, attitudes, and processes across vendor and information-development group boundaries. This simplistic view, in addition to insufficient training and resources, can result in groups struggling to achieve their CCM adoption goals.

In this presentation, Andersen draws on the results of an extensive case study and research in the field of technology transfer to elucidate the planning and learning challenges one group faced when attempting to evaluate and adopt a CCM system. The presenter describes how and why the group’s as well as the system vendor’s lack of planning for the technology diffusion process resulted in a failed CCM initiative.

Real World DITA Learning and Training Samples

Troy Klukewich Oracle

In recent years, we've seen DITA adoption increase for technical documentation, with a growing number of adoptions in varied fields outside of software. What's next? With more companies realizing the value of structured, single-sourced content in product documentation, leading-edge companies are increasingly looking at training. Already, some companies have either set up custom XML or repurposed the DITA 1.1 concept, task, and reference model for learning and training deliverables.

DITA 1.2 includes the new Learning and Training Content Specialization. Troy Klukewich demonstrates the latest work from the OASIS DITA Learning and Training Subcommittee with a focus on real world training samples. The samples provide an excellent starting point for organizations to evaluate DITA Learning and Training as a possible training solution, for tools vendors to explore potential feature support, and for anyone wanting an example of what the new specialization is designed to do (and not do).

With training deliverables covering processes well outside of technical documentation, the DITA Learning and Training specialization has the potential to reach an even wider audience of DITA adopters. Troy discusses the need for learning and training best practices, because structured documentation systems inherently lend themselves to consistent content strategies that require well-defined best practices. He will consider a documentation maturity model that includes both product documentation and training content. Finally, he will touch on the skill sets to look for in the new world of combined and coordinated documentation and training groups.

Return on Investment of a DITA Learning Specialization—A Case Study

Tim Allen Oberon Technologies
Scott Youngblom Oberon Technologies

This presentation details the cost savings and additional revenue generation provided to a customer by implementing a DITA-based Learning specialization. It also explains the improvements and expansion of the product offerings to the customer. This case study includes:

  • A discussion of the initial challenges the customer faced, including how to justify the project costs in these tough economic times
  • How implementing a DITA-based Learning specialization helped them overcome these challenges and automate a number of previously manual processes, providing faster time to market for their products while saving costs
  • How they were able to supply their consumers with more consistent, accurate, and timely information
  • How the DITA architecture has allowed them to keep printed content and Web content synchronized and has expanded their capability to provide additional, more personalized product offerings, thereby increasing their revenue projections

Reviewing and Approving DITA Content—or—Quality Control at the LEGO Factory

Ole Rom Andersen Content Technologies

Reviewing and approving traditional, 200-page monolithic documents with 10 writers and 20 reviewers can often take 10-30 review cycles and last many weeks or even many months! In fact, this bottleneck is one of the things that DITA makes it possible to bypass or at least shorten dramatically. I want to compare the bottleneck to the way quality assurance (QA) is done at the LEGO factory:

  • QA of the individual blocks is done right next to the machine creating them: Is THIS individual module exactly right …?
    • QA of the SKUs (the assembly sets sold) focuses on only two checks:
    • Are there any missing blocks in the set?
    • Are there too many blocks?

I believe this model can be ported directly into the DITA world!
LEGO blocks = DITA topics
LEGO assembly sets = DITA maps

I also believe that the reviewing and approval process needs to change fundamentally from the process used for monolithic documents to a LEGO model! This presentation will discuss the need for the change and how it can be done.

Single Sourcing Sans a CMS

Ben Colborn Citrix Systems, Inc.
Patrick Quinlan Citrix Systems, Inc.

Citrix Education has developed a suite of tools and techniques, leveraging open-source, off-the-shelf, and home-grown technologies to fill requirements in our development process that would otherwise be part of CMS functionality. In this session, we will discuss our current development process, our vision for a future process, and our strategy for building training content using non-specialized DITA 1.1. We will discuss the tools and techniques we use to develop training in multiple languages for ILT and eLearning environments. Capabilities we will demonstrate include: simple and effective reuse using conditional tagging, applying a SCORM wrapper to HTML output, and building PPTs from DITA source. Finally, we’ll close by providing an overview of challenges still ahead, describing further gains we hope to achieve by leveraging a CMS, and answering questions.

Social Networking and the Information Developer: How to get started

Lori Fisher IBM Corporation

Think that Social Networking is for your kids or for the Marketing department at work? Think again! It is increasingly important for Information Developers to know how to leverage community and social networking mechanisms as well! Lori Fisher provides examples of a number of concrete, easy ways to get started.

Some examples of social networking mechanisms that can be applied to software product documentation to improve accuracy, completeness, and customer satisfaction include:

  • A wiki to allow customers to exchange code examples among themselves and rate contributions
  • A tool to allow customers to comment on review drafts during beta and see each other’s comments
  • Partnering with marketing teams to seed Twitter and Facebook accounts with documentation news

Strategies for Reuse and Navigation with DITA

Jon Kieffer Fujitsu Network Communications

Migration to DITA is not an end in itself. Benefits are realized only through a radical overhaul of information structure to take advantage of the new opportunities that DITA offers.

This presentation describes work at Fujitsu Network Communications to leverage DITA in the redesign of very large customer documentation sets (8,000 pages typical for principal products). Kieffer focuses on the contrast between traditional reuse and navigation strategies and new and more effective strategies that can be realized with DITA. The presentation also highlights the challenges of transforming to the new strategies. The goals of transformation are to reduce publication effort while improving documentation consistency and usability.

This presentation includes practical design patterns to

  • Bring similar content under single-source control
  • Uncouple multipurpose procedures into families of scenarios with shared reusable components
  • Separate specialized content from generic content to improve reusability
  • Allow high-level conditional branching in lengthy procedures without compromising task-level reusability
  • Efficiently produce specialized variants of a topic from a generic master

Results are realized using topic-level and conref reuse mechanisms, xref and reltable linking mechanisms, Adobe® FrameMaker® 9 authoring software, and Vasont® content management software. Output is targeted in both HTML and PDF formats.
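
The presentation's specific design patterns are not reproduced in the abstract; as a generic illustration of two of the DITA mechanisms it names, a conref pulls shared content from a warehouse topic, and a relationship table in a map links related topics without embedding cross-references in the topics themselves (file names and IDs are hypothetical):

    <!-- Conref: reuse a caution stored in a warehouse topic with id="warehouse" -->
    <note conref="warehouse.dita#warehouse/esd-caution"/>

    <!-- Relationship table in a DITA map: link a task to its supporting concept -->
    <reltable>
      <relrow>
        <relcell><topicref href="concepts/ring-topology.dita"/></relcell>
        <relcell><topicref href="tasks/provision-circuit.dita"/></relcell>
      </relrow>
    </reltable>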

Surviving and Thriving with DITA and Content Management: Management tips and lessons learned

Sharon Fingold VMware, Inc.
Laura Bellamy VMware, Inc.

After two years of DITA implementation, VMware has 5,000+ topics, translated into 3-6 languages, with 8 output targets for 56 writers in 5 countries. Learn how to plan a deployment, develop the staff, put together the tools, schedule the project, change the processes, and measure success. Sharon and Laura describe how a traditional production and tools team became an engineering, QA, and build team working with information architects, editors, and writers to meet VMware's requirements.

The DITA Vision and Global Markets: How structured content became an executive initiative at FICO

Carroll Rotkel FICO
Elizabeth Taylor FICO

Many documentation and localization managers are becoming aware of the benefits of moving to structured, topic-based authoring with XML or DITA. These methodologies are in some cases demonstrating 30-50% efficiency gains and cost savings in localization alone. In this session, learn why the software company FICO decided to make the move to DITA, how the vision was sold to executives, and the organizational changes and technology investments they made to successfully complete the transition in less than a year.

The Future of DITA Panel

Joe Gollner Stilo International
Eliot Kimber Really Strategies, Inc.

In 2009, we heard many DITA conversations in industries outside of its origins in the technical community. It seemed almost every software tool and its associated marketing-speak promoted its "DITA capabilities." In this session, Eliot Kimber reviews the emergence of DITA in other industries. He'll discuss DITA as it relates to multilingual content and DITA specializations, and we'll hear his predictions for DITA's development over the next 5 years.

Surveying what has been accomplished with DITA so far and what the near future holds with DITA 1.2, Joe Gollner looks even further forward to what DITA 2.0 might look like. As will be immediately apparent, DITA 2.0 expresses something more than simply a distant release of the standard; it also promises a future in which the latent potential within DITA finds mainstream applications, in much the same way we now talk about Web 2.0. The presentation will provide a balanced review of DITA, touching upon both its strengths and weaknesses, and, in looking forward to DITA 2.0, endeavours to describe what DITA will need to look like to play a broader role in the evolution of technical communication.

With case studies that explore the distant shores of what is possible to accomplish with DITA, this talk will provide attendees with practical information about leveraging DITA today as well as a glimpse into what lies ahead. The opportunities and challenges that exist for DITA, and for the community with the greatest stake in its future, are substantial and, as this presentation will argue, the changes called for will be both significant and unavoidable.

The Path to Efficient DITA Translation

Elliot Nedas XTM International

Translating DITA can be challenging, and advising your service providers and translators what to do is often harder than it should be. The good news is that translating DITA need not be a nightmare. Technical writers need to understand how translators think, and translators need to understand how technical writers write. In the middle of this is DITA. The session starts with an introduction to the difficulties faced and leads into a case study of how DITA translation can be managed successfully by using the power of DITA in combination with a CMS and a translation memory system. In brief, you will see that effective DITA translation allows you to do more translation in less time and with more consistent results.

Three Years On: Metrics and ROI and other stories from a mature DITA CMS installation

Keith Schengili-Roberts AMD, Inc.

AMD's Professional Graphics division has been using a DITA CMS for over three years and has been able not only to demonstrate effective ROI, but also to better manage documentation processes and measure productivity. In addition to gaining a more detailed bird's-eye view of the documentation process, over the past few years AMD's Documentation & Localization Manager, Keith Schengili-Roberts, has also had to deal with several other process issues that developed, things that were not anticipated when his group first thought of implementing their DITA CMS. Learn more as Keith talks about his experiences in managing (and measuring) the productivity of a successful DITA-based writing group.

Translate DITA with XLIFF! But How? CMS? TMS? Tools? Yes!

Bryan Schnabel Tektronix

A wonderful thing has happened to DITA in the somewhat recent past. For quite a while, several of us looked toward DITA as a very promising approach on the horizon. Now there is a clear, established user base with a multitude of up-and-running applications, and a wide range of use cases exist. What has not yet been widely adopted is a very fruitful best practice identified by the DITA Translation Subcommittee: use XLIFF to translate DITA.

XLIFF (XML Localization Interchange File Format) is an OASIS open standard designed by all facets of the localization and translation community (including translators, tool makers, implementers, technical communicators, and translation customers) specifically for enabling and managing translations. XLIFF is perfectly suited to facilitate some of the more challenging aspects of translating and managing tens, hundreds, or thousands of topics. Managing multiple topics, leveraging translation memory, managing terminology, predicting word counts, providing alternate translations, locking previously translated strings and separating them from new "needs-translation" strings, and providing a seamless workflow for translators and translation tools can all be automated through the use of XLIFF. Because this practice has not yet become mainstream, many DITA-enabled companies are "leaving money on the table."
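
The abstract stays at the conceptual level; a minimal sketch of an XLIFF 1.2 file extracted from a DITA topic (file names, IDs, and strings are invented) might look like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <xliff version="1.2" xmlns="urn:oasis:names:tc:xliff:document:1.2">
      <file original="topics/configure-server.dita" source-language="en-US"
            target-language="ja-JP" datatype="xml">
        <body>
          <!-- A previously translated, approved unit can be locked against further edits -->
          <trans-unit id="t1" approved="yes">
            <source>Configure the server</source>
            <target state="final">サーバーを構成する</target>
          </trans-unit>
          <!-- A new unit is flagged as still needing translation -->
          <trans-unit id="t2">
            <source>Restart the service to apply the changes.</source>
            <target state="needs-translation"/>
          </trans-unit>
        </body>
      </file>
    </xliff>

Because each unit carries its own state, word counts, locking, and translation-memory leverage can be automated per topic rather than per book.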

Bryan Schnabel, Co-Chair of the XLIFF Technical Committee and XML Information Architect at Tektronix, demystifies the use of XLIFF and demonstrates some of the more efficient ways to leverage and implement XLIFF to help companies save money and time. Whether the approach is harnessed through a CMS, a TMS, translation tools, or coded into the process in other ways, using XLIFF to translate DITA will be the next big thing.

Ugly DITA

Sheila D'Annunzio STMicroelectronics
Marc Speyer Independent Consultant

If used properly, DITA is a powerful, comprehensive, and flexible standard that allows organizations to better use and reuse their structured content, and to reduce costs in the process. But the comprehensiveness and flexibility of DITA can easily result in ugly DITA and a frustrating experience. In this presentation, Sheila and Marc share their recommendations from a DITA pilot project undertaken at STMicroelectronics. They provide valuable insight into how to avoid and overcome problems resulting from the DITA content model, stumbling blocks in content reuse, unexpected print quality issues, adoption resistance, and implementation difficulties.

Using IBM Information Architecture Workbench

Kristen James Eberlein Eberlein Consulting

Task Modeler has a new name! However, it's still a free, graphical tool that can be used to prototype and develop DITA-based information sets. It generates graphical representations of DITA maps that can easily be understood by a wide range of stakeholders: managers, developers, marketing representatives, and technical communicators.

I'll provide an overview of the application and demonstrate how to use it to easily and rapidly create a DITA map, DITA files, and a relationship table that links the DITA files. For people familiar with the application, the session will cover how to creatively use the tool to solve common DITA problems, as well as changes between Task Modeler 5, Task Modeler 6, and Information Architecture Workbench. Attendees will receive a handout that augments the online help by documenting how to perform all of the tasks demonstrated in the session.

Using MindTouch to Manage Documentation in a Fast-Paced Development Environment

Charles Cantrell ExactTarget
Amanda Cross ExactTarget

This presentation describes how ExactTarget uses a particular wiki (MindTouch) to automate workflow processes and to generate documentation for the web services API to their application. Automated workflows include assigning writing, identifying articles whose content was contributed by people outside the department and requires further review, and automating the publication of new and changed content from the development wiki to the delivery wiki. Other automated processes include producing "white labeled" documentation from branded documentation.

In the ExactTarget application, much functionality is provided through web services. To make good use of the API calls, clients must understand the objects, methods, parameters, and properties exposed by the WSDL. In the past, this documentation was developed manually, and was often out of date, incorrect, or both.

Through collaboration with the development group, code was developed that builds documentation in the MindTouch wiki that documents the relationships between the objects, methods, parameters, and properties. The code also pulls definitions of these entities from the web service. While the generated pages do not fully document the API, they provide a very strong starting point and serve as a template for the SMEs who fill in the gaps that cannot be determined by parsing the web service. By identifying all of the web service entities, the auto-generated pages ensure that important elements of the application are not missed.

With each release of a new API, the code can generate an up-to-date list of the entities and update the documentation templates without disturbing information that has been manually entered by SMEs. This process allows the API documentation to be much more complete, accurate, and timely for our clients.

Using the Task Analysis for Effective Documentation

Kristina Brinck ITT Fluid Technology

The Global Enterprise Content Management (GECM) system works to mainstream and reuse technical information across Value Centres and companies within ITT. This system uses a common technical platform, including DITA; it also uses a common documentation process with roles and responsibilities, job descriptions, Authoring Guidelines, and other steering documents. The task analysis is one important part of the GECM documentation process to ensure that all approved content is relevant, complete, consistent, clear, and possible to reuse, and that the time line set for each project is realistic.

In this session, Kristina describes the following:

  • How the task analysis is performed for entirely new outputs and for outputs with high reuse
  • How the task analysis enables a common understanding of the goals of the documentation
  • How the task analysis results in the Annotated Topic List
  • How the Annotated Topic List serves as a working plan for the writers

Kristina will also give examples of the time spent on task analysis and how it affects the time spent on the Subject Matter Experts' content reviews at the end of the documentation process.

We wanted a CMS. We were given SharePoint: Turning disappointment into opportunity

Joe Gelb Suite Solutions
John Allwein Emerson

So you’re all set to approach senior management with your request for a content management system, and they answer with “Great, our IT department just invested in a tool for that purpose! It’s from Microsoft, and it’s called SharePoint.” Not what you were hoping for, but not all is lost. There is still a real opportunity to achieve your content management vision. What’s more, since SharePoint is an enterprise-level tool, there are some great opportunities that can make it easier to achieve your ROI goals by uniting content silos across your global enterprise and facilitating more extensive reuse.

Despite its roots as a web-based document sharing tool, SharePoint can be customized as a platform to support collaborative DITA authoring, component content management, and automated, robust translation management. When mated with a publishing server built on top of the DITA Open Toolkit, SharePoint can allow pushbutton publishing of content to multiple media outputs.

The reporting and workflow features can be used to help track the proliferation of files as you break down and restructure your legacy documents into component topics. Tracking key metrics, such as content reuse, development time for planning future projects, and project status against deadlines, is all possible. Content can be mapped to product development documents and change management systems to better track and manage product changes that affect your content and to better facilitate content categorization and use throughout the value-creation stream and product lifecycle.

When paired with a DITA-aware web deployment engine, your SharePoint development environment can port content to a customer-facing knowledge base to allow dynamic, personalized publishing and real time filtering of content to suit specific customer user needs.

This session will be based on real challenges and experiences at a global manufacturing and technology company consisting of multiple divisions and business units. We will share some of the possibilities available and provide answers to your questions about how to turn a limited system into a real opportunity.

What's "Between the Tags" ... Matters!—How Controlled Language Improves a DITA Implementation

Bob Sima Tedopres International

Controlled English is a method of writing that makes technical English easy to understand. The adoption of a controlled language stimulates the (global) acceptance of technical documentation because it improves readability and translatability and prevents misunderstandings and misinterpretations.

During this session, content quality advocate Bob Sima of Tedopres uses case studies to explain the benefits of controlled authoring and to show how it can reduce overall cost, time to market, and content volume; save considerably on translations; and vastly improve your DITA implementation.

Taming the herd: Get a handle on content by managing metadata

Erik Hennum Independent

Your organization has embraced topic orientation. In fact, you have a herd of topics by the thousands to prove it. Now what? How do you know whether you've got adequate coverage of every important subject? How do you prevent duplicate coverage and retire topics about archaic subjects? DITA 1.2 introduces metadata schemes to give you a handle on your content; a brief sketch of the mechanism follows the list below. You'll find out how to

  • Start simple with lists of values and scale up to powerful taxonomies when you need them
  • Follow best practices in classifying maps so your metadata is maintainable
  • Reap benefits ranging from basic filtering to improved retrieval and reporting and a declared information architecture
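
A brief, hedged example of the DITA 1.2 subject scheme mechanism these points refer to (category and key names are invented): a controlled list of values is declared once and then bound to a profiling attribute.

    <!-- Hypothetical subject scheme map: controlled values for the audience attribute -->
    <subjectScheme>
      <subjectdef keys="user-level">
        <subjectdef keys="novice"/>
        <subjectdef keys="administrator"/>
        <subjectdef keys="developer"/>
      </subjectdef>
      <enumerationdef>
        <attributedef name="audience"/>
        <subjectdef keyref="user-level"/>
      </enumerationdef>
    </subjectScheme>

Maps that reference this scheme constrain the audience attribute to the declared values, which keeps filtering, retrieval, and reporting consistent across the content set.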

Where Are They Now?

Elizabeth Fraley Single-Sourcing Solutions, Inc.

There’s a software side to dynamic information delivery. We know this. Customers who have seen IBM talk have come to us and said “Sure, they can get there, but can I?”

What if you’re not a software company? What if your paper product is your deliverable? What about the Medtronics of the world? Or the Harcourt School Publishers? Or the National Council on Insurance? What’s in your reach? What have they really achieved over the years? Did they see the ROI they expected?

Over the last year, Single-Sourcing Solutions has spent time interviewing long-term Arbortext customers to find out where they are now. We wanted to know whether our customers were realizing the full potential of their solutions. We wanted to know what data they’d collected, what lessons they’d learned, and what they’d implemented over time.

This talk highlights success stories from companies who have been doing dynamic information delivery for a very long time. Not one at a time, but aggregated together. We will include qualified, hard data on benefits, breadth of projects, and feature impact on long-term implementations.

WinANT—Simplifying and Automating DITA Publishing

Tony Self HyperWrite Pty. Ltd.

With the DITA Open Toolkit, transforming a collection of DITA topics into a deliverable format such as PDF is not a simple, one-step technical process. The transformation process, or the build, involves multiple passes of the source files to generate links, resolve conrefs, create intermediate files, and compile or assemble the publication. Although there are a few approaches, the author will generally need to write a "build file" and then process that build file using Apache Ant. This process involves hand-crafting the XML build file and typing and executing an esoteric command line.
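
For readers who have not seen one, a minimal Ant build file for the DITA Open Toolkit of that era (paths, map name, and filter file are placeholders) looks roughly like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Hypothetical DITA-OT build file: publish userguide.ditamap to PDF -->
    <project name="userguide-pdf" default="build">
      <property name="dita.dir" location="C:\DITA-OT"/>
      <target name="build">
        <ant antfile="${dita.dir}/build.xml">
          <property name="args.input" location="maps/userguide.ditamap"/>
          <property name="transtype" value="pdf"/>
          <property name="output.dir" location="out/pdf"/>
          <property name="dita.input.valfile" location="filters/release.ditaval"/>
        </ant>
      </target>
    </project>

WinANT generates and runs build files of this kind behind its Windows dialogs, so the author never has to hand-edit the XML or type the command line.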

WinANT Echidna is a Windows interface to the DITA Open Toolkit build functionality. It allows the author to select the many build options in a familiar Windows interface, browse for the ditamap to be processed, set conditional processing rules, and initiate the Ant build. The build configuration can be saved for later retrieval, and the build files generated can be used to set up an automatic publishing schedule. Diagnostic utilities also help users rectify problems with their DITA OT installation. In this session, WinANT's developer demonstrates this open source tool, and describes the ways it can be "fine-tuned" to streamline DITA publishing.

© 2010 Center for Information-Development Management     710 Kipling St. Suite 400     Denver, CO     80215
303.232.7586     info@infomanagementcenter.com