Usability Test for Programiz Pro: A Code Learning Platform

Iowa State University | Sep – Dec 2023

Project Overview

The Project:

As part of my graduate course at Iowa State University, I collaborated with a team to evaluate Programiz Pro, an online platform offering interactive coding lessons in languages like Python, Java, and C++.

Our goal was to assess its usability and effectiveness in supporting novice programmers in their learning journey.

Summary:

  • 🎭Role: 
    Interviewer, note-taker; created research instruments (survey, interview guide, observation log); analyzed data; wrote the report; presented results
  • 🗓️Timeline: Sep – Dec 2023
  • 🔎Tools Used: 
    Qualtrics, MS Teams, Zoom, MaxQDA
  • 📏Methods: 
    Moderated and Unmoderated Usability Testing, Surveys, Thematic Analysis, Sentiment Analysis
  • 🧑‍🎓Target Audience: 
    University students, Beginner-level programmers

🎯Research Goals

We aimed to evaluate how well Programiz Pro supports novice programmers in learning Python concepts. Our primary objectives were:

  • Measure how well learners understand and remember programming concepts.
  • Identify usability issues that may hinder learning.
  • Evaluate how well support features (e.g., SensAI) guide learners.

[Image: Screenshot of the Programiz Pro dashboard]

🛣️Methodology

Pre-survey

The survey collected information on participant demographics, familiarity with different code-learning platforms, and learning habits.

Moderated Usability Session

In the moderated usability session, participants completed two tasks while following the think-aloud protocol:

  • Task 1: 🧭Navigation within the Platform
    Participants were asked to find a course in the course catalog.

  • Task 2: 📖Completing a Learning Module
    Participants completed the first two modules of a beginner Python course, reading the lesson material, answering quizzes, and practicing coding tasks on Programiz Pro.

Follow-up Interview

After completing the tasks, participants took part in a follow-up interview about their interactions with and perceptions of the platform.

Post-survey

After the usability test session, participants completed the Post-Study System Usability Questionnaire (PSSUQ), a 16-item standardized questionnaire widely used to measure users’ perceived satisfaction with a product.

[Image: Overview diagram of the study method]

👩‍🎓Participants

(names are pseudonyms)

Out of 10 interested individuals, the following three participants were chosen to represent diverse perspectives.

👨‍🎓James Lorre

  • Sophomore
  • Programming Knowledge: Intermediate
  • No experience with Python
  • Learning Platforms Used to Learn Programming: Codecademy, freeCodeCamp, Coursera, Khan Academy, Udemy
  • Preferred Learning Mode: Watching video tutorials, hands-on practice, interactive coding courses

👩‍🎓Isabella Sage

  • Freshman
  • Programming Knowledge: Beginner
  • No experience with Python
  • Learning Platforms Used to Learn Programming: freeCodeCamp
  • Preferred Learning Mode: Watching video tutorials, hands-on practice, interactive coding courses

🧑‍🎓Smriti Sinha

  • Grad Student
  • Programming Knowledge: Beginner
  • No experience with programming
  • Learning Platforms Used to Learn Programming: Coursera
  • Preferred Learning Mode: Watching video tutorials, online tutorial documentation

📊Findings

Usability Questionnaire (PSSUQ)

Our post-survey revealed key insights into user satisfaction with Programiz Pro, though with only three participants these results may not fully represent the broader user base.

  • System Performance received a high score of 6.72/7, indicating that users were generally satisfied with how well the platform functioned and supported task completion.
  • Information Quality scored 6.61/7, showing that users found the coding lessons clear and effective for learning.
  • User Interface Quality was rated the lowest at 6.33/7, highlighting some usability challenges. Users experienced difficulty navigating certain features, suggesting opportunities to make the interface more intuitive and user-friendly.

[Image: Bar graph of PSSUQ scores for System Performance, Information Quality, and User Interface Quality]
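
For context on how these subscale numbers are computed: each PSSUQ subscale score is just the mean of its items’ ratings. Below is a minimal Python sketch using the standard PSSUQ v3 item groupings and the report’s higher-is-better convention; the responses are hypothetical, not our participants’ data.

```python
# A minimal sketch of PSSUQ subscale scoring, assuming the report's
# higher-is-better convention (classic PSSUQ scoring runs the other way,
# with 1 = strongly agree). Item groupings follow PSSUQ v3; the
# responses below are hypothetical placeholders, not participant data.

SUBSCALES = {
    "System Performance (SysUse)": range(1, 7),   # items 1-6
    "Information Quality":         range(7, 13),  # items 7-12
    "User Interface Quality":      range(13, 16), # items 13-15
    "Overall":                     range(1, 17),  # items 1-16
}

def subscale_mean(responses, items):
    """Mean rating over a subscale's 1-based item indices."""
    return sum(responses[i - 1] for i in items) / len(items)

hypothetical = [7, 6, 7, 7, 6, 7, 7, 6, 7, 6, 7, 7, 6, 6, 7, 7]

for name, items in SUBSCALES.items():
    print(f"{name}: {subscale_mean(hypothetical, items):.2f}/7")
```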

Thematic Analysis

We coded the transcripts against six factors and sub-divided the codes into positive and negative themes (a minimal tallying sketch follows the list). The factors are:

  • Adequacy of support
  • Efficacy and adequacy of content
  • Experience with search and navigation
  • Interaction with quizzes and coding challenges
  • Interaction with the SensAI feature
  • Interaction with the user interface (UI) components
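
As a minimal illustration of the tallying step (the actual coding was done in MaxQDA), a counter keyed by (factor, valence) pairs is enough. The coded segments below are hypothetical placeholders, not participant data.

```python
# A minimal sketch of tallying coded transcript segments by factor and
# valence; the study's real coding lived in MaxQDA, not in this script.
from collections import Counter

coded_segments = [  # (factor, valence) pairs from a hypothetical codebook export
    ("Experience with search and navigation", "negative"),
    ("Experience with search and navigation", "negative"),
    ("Interaction with quizzes and coding challenges", "positive"),
    ("Adequacy of support", "positive"),
    ("Interaction with the SensAI feature", "negative"),
]

tally = Counter(coded_segments)
for (factor, valence), count in sorted(tally.items()):
    print(f"{factor} [{valence}]: {count}")
```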

[Image: Word cloud generated from the usability-session transcripts]

For each factor, we grouped the coded themes into positive and negative sets. Every factor surfaced both positive and negative themes, with one exception: Interaction with Quizzes and Coding Challenges drew no negative themes at all.
[Image: Donut chart of sentiment-analysis results]

Sentiment Analysis

The sentiment analysis of user feedback revealed that slightly positive statements dominated (40.8%), followed by neutral (30.4%) and positive (21.6%) responses. 

This indicates that while users generally found the platform functional and helpful, minor usability issues, such as difficulty navigating or locating the SensAI feature, prevented a more enthusiastic response. The neutral feedback suggests the platform met expectations but didn’t fully engage users emotionally.

Improving key areas like navigation, feature visibility, and overall user engagement could shift the experience from slightly positive to more consistently positive, enhancing user satisfaction.
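
The sentiment coding itself was done in MaxQDA. Purely to illustrate the underlying idea, the sketch below buckets statements using NLTK’s VADER analyzer; the thresholds and example statements are our own assumptions, not MaxQDA’s categories or participant quotes.

```python
# A minimal sketch of sentiment bucketing with NLTK's VADER analyzer.
# The category thresholds are hypothetical, chosen only to mirror the
# "positive / slightly positive / neutral" buckets reported above.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

def bucket(statement: str) -> str:
    """Map VADER's compound score (-1..1) onto coarse sentiment buckets."""
    score = sia.polarity_scores(statement)["compound"]
    if score >= 0.5:
        return "positive"
    if score >= 0.05:
        return "slightly positive"
    if score > -0.05:
        return "neutral"
    return "negative"

# Hypothetical example statements, not participant quotes
for s in ["The lessons were really clear and fun.",
          "It works, I guess.",
          "I couldn't find the SensAI button anywhere."]:
    print(f"{bucket(s):>17}: {s}")
```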

Recommendations

[Image: Word cloud of participant recommendations]

Content

  • The users suggested making the reading material more concise. However, trimming the content could leave beginners without enough support. Programiz could offer a concise view by default, with a ‘Read More’ button that expands the full explanation.

Design and User Interface

  • One user recommended adding a ‘Dark Mode’ option. Many users prefer dark mode to reduce eye strain during prolonged use, and since Programiz is an educational platform that often requires long screen sessions, it should offer one.
  • One user also recommended the ability to customize the compiler’s color theme as a “fun feature”.
  • One user pointed out the lack of pictures and visual elements on the platform. While the platform is inherently text-based, richer visual media would help attract users with diverse learning-mode preferences.
  • One user pointed out that making the course the learner is currently taking more prominent than related/recommended courses would make navigation easier.

Search and Navigation

  • The recommended-courses list sits at the bottom of the dashboard, and during the test none of the users scrolled that far. Consider moving it to a more visible section at the top of the dashboard to improve learner engagement.
  • All the users had a difficult time moving back to previous lessons within a module. Sub-modules should be added as toggles in the left navigation bar instead of relying on the previous-lesson button/pagination.
  • One user found the search feature lacking and suggested improvements: adding auto-complete to the search bar, displaying relevant courses for each language at the top of the learning-path options, and using a horizontal gallery view for search results to reduce scrolling.

Discussion Forum

  • Programiz has a separate Discord channel instead of an in-platform discussion forum. A community feature within the platform would lower the barrier to seeking communal support and enhance engagement.

Quizzes

  • Each lesson has a small, static set of quiz questions, which can become repetitive, and therefore less useful, when learners repeat a module. A dynamic question pool would make the knowledge assessment more credible (a minimal sketch follows).
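
A minimal sketch of the dynamic-quiz idea, with a hypothetical question bank: each attempt draws a fresh random subset, so repeating a module does not repeat the identical quiz.

```python
# A minimal sketch of a dynamic quiz: each attempt samples a fresh
# random subset from a larger question bank. The bank and its questions
# are hypothetical, purely to illustrate the recommendation.
import random

QUESTION_BANK = [
    "What does print() do?",
    "Which symbol starts a comment in Python?",
    "What type does input() return?",
    "What is the result of 3 // 2?",
    "How do you concatenate two strings?",
]

def draw_quiz(bank, n=3):
    """Return n distinct questions in random order."""
    return random.sample(bank, k=n)

print(draw_quiz(QUESTION_BANK))
```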

‘SensAI’ Feature

  • The biggest problem with SensAI, the OpenAI-powered assistant, was that users did not know it was available; the feature needs to be more prominent to be noticed. In addition, it sometimes produces unreliable explanations and fails to detect errors.
  • A disclaimer is also necessary, as misleading SensAI recommendations can leave programming novices confused.

‘View Solution’ Feature

  • Two of the three test users missed the ‘View Solution’ feature entirely; it, too, needs to be more visible.