Last week wrapped up I/ITSEC 2017, the largest training and simulation event in the world. This year, we exhibited and talked with people about how we help solve problems around eLearning standards like xAPI and SCORM (along with the updated DoDI 1322.26). What was notable about this year, for me, was my introduction to a new term: "system of systems." For those unfamiliar, a system of systems is a collection of systems brought together to create a new, more complex system that's greater than the sum of its parts. The idea is used throughout organizations and software, but at I/ITSEC, many people were talking about it in the context of xAPI.

It's safe to say that every DoD agency uses multiple systems for training. They may have one or more LMSs, AR tools, VR tools, authoring tools, content management tools, physical simulations, in-person training… the list goes on. Because of the complexity of this ecosystem, they must think strategically about how each system works within the whole. Thus, the idea of a system of systems.

What arose in conversations at I/ITSEC was how well-suited xAPI is to supporting the creation of, and reporting on, a system of systems. At its core, xAPI is a communication protocol that lets multiple, separate pieces communicate in the same way. Using xAPI, the DoD could connect experiences from in-person training to those in an LMS.
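
To make that concrete: every xAPI-enabled system reports activity as "actor verb object" statements with the same JSON shape, so a simulator, an LMS and an instructor-led session can all feed one LRS. Here's an illustrative, simplified statement (the learner and activity IDs are hypothetical):

var statement = {
    actor:  { mbox: "mailto:learner@example.mil", name: "A. Learner" },
    verb:   { id: "http://adlnet.gov/expapi/verbs/completed", display: { "en-US": "completed" } },
    object: { id: "http://example.mil/activities/convoy-simulation", objectType: "Activity" }
};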

We saw some great tools that leverage modern technology for training, particularly when it comes to AR and VR. Traditionally, each of these tools would be self-contained. But with xAPI and a system of systems approach, each of them can become part of a larger plan that connects disparate systems and experiences.

We look forward to learning more about how DoD agencies (or those outside the DoD too!) use xAPI to support the creation of their system of systems. If you ever have any questions about how you can do this or how we’ve helped other clients create their ecosystem, let us know. We like talking about the standards.


Last Friday, we were excited to be involved in xAPI Party. The celebration marked the end of the xAPI Cohort run by Torrance Learning (their next Cohort starts February 1), and our Director of Products TJ Seabrooks gave a demo. Since many of the folks in the xAPI Cohort were familiar with the free LRS in SCORM Cloud, we wanted to share a story of how we helped one of our clients with a super particular xAPI problem, so TJ showed how to add attachments to an xAPI course in Lectora (if you want to jump down to those directions, click here).

This specific example comes from a large client in a highly regulated industry. Before working with us, their certificates were tied to the end of each course. If a learner lost a certificate, they'd have to go back, relaunch the course and redownload it. The client needed to maintain and later present certificates so that any administrator in the organization could go into the LRS, view learners' scores and download certificates for each learner.

Our solution was to build a simple grade book-style reporting system that lets them view xAPI attachments. xAPI is particularly well-suited to this because it is reusable: any content you author can be launched from any LMS that supports xAPI, and any LRS can store and fetch the attachments. This is unlike SCORM, where you would need to build a custom solution that only works for a single system.

Since our customer was already familiar with Lectora, we provided instructions for setting up an action in a Lectora course that sends our own custom event when the course is finished. If you don't use Lectora, you could adapt these steps fairly easily to another authoring tool that supports xAPI (and if you're struggling, just reach out to us).

If you want to see everything "in action," check out the video of TJ's demonstration over at Torrance Learning's Adobe Connect recording. In the demo, TJ walks through how to add the custom JavaScript code (TinCanJS and html2canvas) to Lectora, test the course in SCORM Cloud and then retrieve the certification in our custom reporting dashboard, which shares info like score, pass/fail, launch date and assessment result.

Technical steps for sending and receiving attachments in Lectora

To include TinCanJS in the project, you need to create an HTML Extension object (Figure 1) and set the “Type” property to “Top of file scripting” (Figure 2).

Figure 1

Figure 2

Click edit (seen in Figure 2 above) and add the TinCanJS file as you would on a webpage (through a <script> tag with a relative URL), or paste the code between <script> tags (Figure 3).

Figure 3

Something to note: any code in an HTML Extension has to be valid HTML, so if any JavaScript you add isn't inside a <script> tag, it will break.
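
As a rough sketch, the extension's contents might look like this (the file names here are placeholders for however you've attached the library files):

<script src="tincan-min.js"></script>
<script src="html2canvas.min.js"></script>
<script>
    // custom statement-sending code goes here
</script>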

Next, to create the screenshot of the page, use html2canvas. You'll need to add the html2canvas source the same way you did TinCanJS. If you want to capture and send the current document, i.e., the page where you loaded the JS, pass document.body to the html2canvas() function. This function returns a promise, so use a .then() to process the screenshot. The parameter passed to the .then() callback is an HTML canvas object, so to get the content and the content-type, use the canvas.toDataURL() function. The format of a data URL is as follows:

data:[<contenttype>][;base64],<content>
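
For example, a PNG screenshot produces something like this (payload truncated):

data:image/png;base64,iVBORw0KGgoAAAANSUhEUg...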

Figure 4

You will need to parse this string to get the content-type and the content itself (the parsing code is shown in Figure 4). The content portion is base64-encoded; once decoded, it is the raw binary data that should be placed in the content section of the attachment.

In the code shown in Figure 4, cfg is the object that contains all of the other parameters of the statement, and cfg.attachments is an array that holds all of the attachments associated with that statement. Note that the Figure 4 code runs after you have set up both the LRS and the other parameters of the statement in your JavaScript.
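
If the figure doesn't come through for you, here's a minimal sketch of what that code might look like. It assumes lrs is a configured TinCan.LRS, cfg already holds the statement's actor, verb and object, and the usage type IRI is a placeholder; the exact attachment field handling in TinCanJS may differ slightly:

html2canvas(document.body).then(function (canvas) {
    var dataUrl = canvas.toDataURL();                       // "data:image/png;base64,...."
    var contentType = dataUrl.split(";")[0].split(":")[1];  // e.g. "image/png"
    var content = atob(dataUrl.split(",")[1]);              // decode base64 to raw binary

    // Attach the screenshot; the entry follows the xAPI attachment metadata fields.
    cfg.attachments = [
        {
            usageType: "http://example.com/attachment/certificate", // placeholder IRI
            display: { "en-US": "Certificate" },
            contentType: contentType,
            content: content
        }
    ];

    lrs.saveStatement(new TinCan.Statement(cfg), {
        callback: function (err, xhr) {
            // err is null on success
        }
    });
});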

Once you send the statement, you can run queryStatements() on the LRS, which returns a StatementsResult object. When you run queryStatements(), be sure to set the "attachments" flag to "true" (shown in Figure 5). This flag tells the LRS to return both the statements and any attachments it has.

Figure 5
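
In sketch form, the query might look like this (exactly where the flag goes is our guess from the description above):

lrs.queryStatements({
    params: {
        attachments: true    // ask the LRS to include attachment content in the response
    },
    callback: function (err, result) {
        // on success, result is a TinCan.StatementsResult
    }
});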

Once the StatementsResult object is returned, you can iterate through its list of statements until you find the statement that has the attachment. To download the file, you’ll need to run the following code:

var link = document.createElement("a");
link.download = "Test.png";
// If <attachment>.content holds raw binary rather than base64, wrap it in btoa() first.
link.href = "data:" + <attachment>.contentType + ";base64," + <attachment>.content;
link.click();

Here, <attachment> is the TinCan.Attachment object whose content you want to download. This code reconstructs the data URL and then uses it to download the file. One thing to note: the file extension in link.download should match the file type of the attachment (if the file is a .jpeg it should be .jpeg, if it's a .pdf it should be .pdf, and so on).
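
Putting it together, finding the statement that carries the attachment might look like this sketch (it assumes each returned statement exposes an attachments array, mirroring the cfg.attachments structure above):

var attachment = null;
result.statements.some(function (statement) {
    if (statement.attachments && statement.attachments.length > 0) {
        attachment = statement.attachments[0];   // grab the first attachment found
        return true;                             // stop iterating
    }
    return false;
});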

Customize for your needs

This is a basic guide for creating, sending and receiving attachments inside Lectora, and it can be completely customized to your needs. For example, the object passed to html2canvas() can be any HTML element, not just the entire document, so if you only want to capture a certain <div> on the page, you can pass that element instead.
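
For instance, where "certificate" is a hypothetical element id:

html2canvas(document.getElementById("certificate")).then(function (canvas) {
    // same toDataURL/attachment handling as before
});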

Also, the TinCanJS library is designed to make sending and receiving attachments easier: the queryStatements() function handles verifying that each attachment's content is matched with the rest of its metadata and attached to the correct statement.

If you have any questions implementing these steps or are curious about how we can help you with xAPI in general, reach out to us. In the meantime, we’d recommend checking out the free LRS in SCORM Cloud and the xAPI Open Source libraries.


Mea culpa

Categories: Announcements, News, Spec Effort, xAPI

Posted 8 December 2017

 

I’m sorry. My bad. Mea culpa.

I wrote this all the way back in September, and I told you I’d follow it up with a further post one week later. It’s now eleven weeks and one day later and I still don’t have an answer for you. To be honest, I am struggling to discern which pieces of work would best support the xAPI community and Rustici Software. We’re talking about it here frequently, and haven’t reached consensus. And for that reason, I’m not making commitments and we’re not starting the process of building anything yet.

We’re active, yes. There’s good work happening at the IEEE LTSC TAG xAPI, and we’re doing a bit of it. And ADL has published new BAA requests, and we’re considering those. But mostly, we’re patient. We’re thinking through what we could build and if it’s the best use of our energies.

So, for the time being, please accept my apologies for naively predicting I would have something conclusive to say a week later. I’ll keep trying.


Last week, the Department of Defense (DoD) signed the updated DoDI 1322.26 Distributed Learning (DL). The latest DoDI advises all entities within the DoD to procure eLearning technology solutions that are compliant with the SCORM or Experience API (xAPI) specifications.

This Instruction replaces the 2006 version of DoDI 1322.26, “Development, Management, and Delivery of Distributed Learning,” which mandated (as opposed to advised) the use of SCORM in all eLearning technology used by the DoD. With the updated DoDI released, DoD entities can source the right DL solution based on their requirements, as opposed to being limited by the SCORM-focused scope of the older Instruction.

So what’s all the fuss about?

The 2006 DoDI required any DL technology to be SCORM conformant. After xAPI was released in 2013, it was hard for government organizations to purchase modern products as xAPI was not supported by the existing Instruction and there was no way to verify if an xAPI solution conformed to the specification. Now, government organizations have the flexibility to procure the right technical solution based on their requirements, and a means to verify that the products conform to either SCORM or xAPI.

So why are we so excited?

We are excited because this is the culmination of a lot of work for many people at both ADL and Rustici Software. In 2015, we at Rustici were awarded a BAA from ADL to help them revise the 2006 DoDI 1322.26. You can read more about that story on the Rustici Software blog if you’d like.

So how can I find an xAPI compliant product?

Lucky for you, ADL recently launched a list of Conformant LRSs as part of their xAPI Adopter Registry. If you’re looking to procure an xAPI conformant LRS, this is a great place to start. If you’re looking for resources about xAPI conformance, check out the official xAPI reference and support resource for DoDI 1322.26.


Part One: How We Decide to Do Work

Categories: Ideas, Spec Effort, Standards, xAPI

Posted 21 September 2017

 

A couple of days ago, I wrote about the state of ADL and Rustici Software's take on it. One of the real community leaders, Aaron Silvers, then shared his perspective, partially in response. If you read both posts, you'll see some overlap and some gaps between our responses, but the thing I want to address is that Aaron seemed to be asking a question or making a request of me (Tim?) or of Rustici Software in the process.

Important note for those unfamiliar with this space: I work at Rustici Software, a for-profit software company. Since we started working with standards in 2003, we've been active within the community and have tried to build software that spares customers from having to deal with the standards. This website, like scorm.com before it, is how we interact with and provide resources to that community.

Aaron may not have been asking these questions explicitly, but in order to answer his, I have to explore two of my own:

  • How does Rustici Software decide to do work?
  • How does Rustici Software decide which work to do?

How we decide to do work

There are two kinds of work that are clear yeses for us.

  • Work that serves two or more other organizations well enough that they’ll pay us enough to justify the work.
  • Work that we can do now because it helps us do our jobs better.


Number one might be pretty obvious to you. This is the essence of a products company.

Number two is a little less obvious, but just as true. Back in the SCORM days, one of the fundamental problems was that it was simply tough to tell what was going on when an LMS launched a piece of content. As good developers do, the venerable Mike Rustici added debugging tools so he could see what was going on. (Keep in mind, this was way back before good debugging tools were built directly into browsers.) Mike was solving a problem he had, but he quickly saw the broader utility of those debugging tools.

We listed that debugging log as a top feature of SCORM Engine from day one. We also decided that it was worth sharing with the world. We wrapped a little bit of code and interface around our core product (SCORM Engine), labeled it SCORM Test Track, and shared it. It’s been subsumed by SCORM Cloud now, but that capability brought thousands of people to Rustici Software and introduced them to things that we do well.

Those debug logs, and Test Track, have had a real, lasting, positive impact for both the community and for us at Rustici Software. If we're going to do work that fails at number one (making money directly), then we want it to have an impact.

ADL’s guiding hand

For most of the last 15 years, ADL has been the primary organizing force in the corporate eLearning standards space. This force is realized in two ways:

  • ADL funds research of a specific type, with specific organizations, which causes things to happen to the eLearning standards.
  • ADL decides what is in: in scope, in the spec, on the agenda for the specification meetings. This is mostly good, because a community needs organization and leadership.

This has led to real and important work. Project Tin Can was a successful initial effort on our part, funded entirely by ADL, that led to what you now know as xAPI. Similarly, ADL funded the work that DISC did in 2016-2017 that led to an xAPI profile definition specification. This money from ADL provided incentive, and ADL's guidance provided direction.

ADL has served as the arbiter, allowing certain things to become a part of the core xAPI specification, and pushing others into other areas (cmi5, for example). They also made decisions about which community projects to highlight, which ones to work from.

Our rules about taking on work are somewhat different with regard to standards bodies. On multiple occasions over the last three years, work that Rustici has done and offered to the community in various ways (OSS or hosted services) has been passed over or recreated. This includes:

What should we do from here?

So here's the crux of it: based on the current budgetary environment in the US, ADL does not currently have the ability to fund additional research, nor do they have many resources to do work in-house. They have retained, however, their position of authority; they decide what's in, or they do until they don't.

At some point, we had to start asking ourselves this question: if ADL doesn't explicitly approve and fund work we're doing for community use ahead of time, does it serve us, or anyone, for us to take on big chunks of work like this? Put simply: under what circumstances are we willing to do work to support the community without being paid?

So I have a question for the community… for you, the reader who trudged through just this many words. If we stand up an xAPI Profile Server and a service to test for valid, well-structured xAPI Profiles, on our servers, evolving it at the pace and in the manner we see fit based on the problems expressed to us by our customers and the community, will you use it? Would you allow us to play a significant, central role in that way? And to ADL, would you approve of that?

My sense is that the community would like for us to build these things, but only under very specific conditions.

OK. I'm about 1,000 words into this post and I've answered one question. But I'm going to stop here. The answer to this one precedes the answer to the second: how does Rustici Software decide which work to do? We'll come back to that in a post next week.* Until then, let us know if you're open to using tools that we build.

* Update: We are still pulling together our thoughts on which work we plan to do, based on conversations with standards folks and our own internal team. This is coming; it's just going to take a little longer than we thought.

Have a more nuanced response? Email it to us: info@experienceapi.com.
