Thursday, March 13, 2008

What does poor website usability cost you?

Recently I read a wonderful article about the cost of poor usability. It's really a must-read, so I've shared it here for you.


Product returns in the U.S. cost a hundred billion dollars a year, and a recent study by Elke den Ouden, of Philips Electronics, found that at least half of returned products have nothing wrong with them. Consumers just couldn’t figure out how to use them.

That’s one of several interesting observations James Surowiecki makes in a recent Financial Page column in the New Yorker. Surowiecki’s piece focuses on the obstacles to creating usable consumer products, but you don’t have to leap too far to make a comparison or two to website usability.

The Philips Electronics study cited above got me wondering about the revenue e-commerce site owners leave on the table when they offer visitors a subpar user experience. While it would be somewhere between disingenuous and stupid to count every site abandonment as an order sacrificed to poor usability, it’s worth considering what fraction of these missed opportunities better usability could in fact reclaim. And then think about that portion’s worth in terms of revenue, customer satisfaction and lifetime value.

Also interesting, especially in light of Surowiecki’s authorship of The Wisdom of Crowds, is his explanation of why consumer electronics products continue to suffer from “feature creep” despite designers and engineers increasingly knowing better. Depending on their position in the purchase cycle, the crowd may not actually be so wise.

It turns out that when we look at a new product in a store we tend to think that the more features there are, the better. It’s only once we get the product home and try to use it that we realize the virtues of simplicity. A recent study by a trio of marketing academics—Debora Viana Thompson, Rebecca W. Hamilton, and Roland T. Rust—found that when consumers were given a choice of three models, of varying complexity, of a digital device, more than sixty per cent chose the one with the most features. Then, when the subjects were given the chance to customize their product, choosing from twenty-five features, they behaved like kids in a candy store. (Twenty features was the average.) But, when they were asked to use the digital device, so-called “feature fatigue” set in. They became frustrated with the plethora of options they had created, and ended up happier with a simpler product.

Fascinating stuff in and of itself, and it seems like there’s at least one instructive parallel to the site design process. Just because users say they want a feature (think focus group) does not mean they’ll be ready, willing, or able to use and enjoy this feature if you go ahead and add it to your site.

Whether you’re designing a cell phone, a software application or a new payment method for your checkout, the challenge is that each additional feature demands that your user make an additional decision. That decision can be as seemingly benign as “Can I afford to ignore this button?” or as expletive-inducing as “What the &*%#! is this third scroll bar for?” But what can you do, ignore your customers’ requests? There’s no perfect answer, but your implementation can make a difference, and user testing early and often significantly increases your chances for success.

Monday, March 3, 2008

Usability Issues on shutterstock.com

I use shutterstock.com to download corporate images. Because of the poor usability of its download flow, I have to wait a long time to download even a few images. I need to go through the following steps:
1. Search for images.
2. Select a thumbnail image (you can select only one image at a time).
3. The selected image is shown at full size; click to download it at one of several resolutions.
4. Type the security code and press "Enter".
5. Search again for the next image.

If I want to download five images from the same category, I have to repeat all of the above steps for every single image. It takes more than 20 minutes even though my browser is quite fast. It's a good example of BAD usability.
There are lots of simple ways to fix this usability issue.

My solution is:
1. Let users select multiple images on the search results page.
2. List the selected images at a default resolution of Medium; if you want a different resolution, change it right there for one image or for all of them. Enter the security code on the same page, have the site bundle all the image files into a single zip archive, and download that (a rough sketch of the bundling step follows).
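To make the bundling idea concrete, here is a minimal Python sketch of a download endpoint that zips the selected files into one archive. The Flask framework, the route, and the file names are all my own assumptions for illustration; this is not how shutterstock actually works.

    import io
    import zipfile

    from flask import Flask, send_file  # hypothetical choice of web framework

    app = Flask(__name__)

    # Hypothetical: the image files the user ticked on the search results page.
    SELECTED_IMAGES = ["images/beach.jpg", "images/forest.jpg", "images/city.jpg"]

    @app.route("/download-selected")
    def download_selected():
        # Build the zip in memory, one entry per selected image.
        buffer = io.BytesIO()
        with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
            for path in SELECTED_IMAGES:
                archive.write(path)
        buffer.seek(0)
        return send_file(buffer, mimetype="application/zip",
                         as_attachment=True, download_name="selected-images.zip")

One round trip, instead of five repetitions of the search-select-download loop.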

If you have any other solutions, post your comment here.

Wednesday, February 13, 2008

Usability Joke

Recently I read an article about usability engineering. It's really fun, with fantastic examples of what happens when usability engineering is lacking.
I agree with the author: 80% of software companies don't care about usability. Most of my friends in the software industry don't even know what my role is.
The following conversation happens all the time with my friends (software professionals).

Friend: What are you doing now?
Me: I am working as a Usability Engineer in a software company.
Friend: What is that?
Me: You know how Google came to be number one in the Internet industry? Why not Yahoo?

(I start the usability story from there...) OK, back to the topic. Read the article here: http://www.ataricommunity.com/forums/showthread.php?t=633417

Realize how important usability is.

Tuesday, January 29, 2008

Competitors of Usability Manager

Today I found a few tools that resemble my dream tool, Usability Manager.

The first is CarettaSoftware's GUI Design Studio 2.4, a specialised tool for Software Designers, Analysts, Usability Engineers, Project Managers and Consultants.

The second is Axure RP Pro, which is dedicated to helping people design applications that are more useful and more usable. It is similar to GUI Design Studio.

My Usability Manager is a more advanced tool for usability engineering. It covers not only image prototypes, wireframes and specifications, but can also run usability testing and heuristic evaluation in a unique way.

The usability testing report will contain complete details of the end user's actions, and it will suggest where you can improve the usability design.

Monday, January 21, 2008

A prototype for a Usability Engineering Tool

Being a Usability Engineer, I searched a lot for a complete usability tool. I found a few, but they are limited in their functionality. Later I thought of making use of Zoho products to build such a tool.
For example, Zoho Creator can be used for conducting "Usability Reviews", Zoho Wiki can be used to gather "Usability Requirements", and Toondoo can be used for "Usability Prototype Designing" (both wireframe and architecture).


Using these ideas, I have built a prototype of this usability tool.


How will this tool work?


I. Usability Requirement Collection:

1. Select an existing project or create a new project.
2. Create modules and write the requirement details for those modules.
3. Create child module(s), if any.

II. Usability Prototype:

1. Image Prototype (Architecture, Web Interface, Java Interface, .Net Interface, etc.)
2. HTML Prototype


III. Usability Testing:

1. Install the browser agent on both the server and the client systems.
2. Write the usability test case and send it to the usability testing participants (optional).
3. Record the Usability Engineer's actions on the web application (that is, the expected result).
4. Connect to the server and have the participants conduct the usability test on the client systems.
5. The tool compares the expected result (the Usability Engineer's actions) with the usability testing results (the participants' actions) and generates a report; a minimal sketch of this comparison follows the list.
6. Usability Engineers can review each participant's actions from their recorded scripts.
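Since the tool is still only a prototype, here is one way step 5's comparison might work: a minimal Python sketch that diffs two recorded action sequences. The (element, action) session format and all names are my own assumptions for illustration.

    import difflib

    # Hypothetical session format: (ui_element, action) pairs in the order performed.
    expected = [("search_box", "type"), ("search_button", "click"),
                ("result_1", "click"), ("download_link", "click")]
    participant = [("search_box", "type"), ("search_button", "click"),
                   ("nav_menu", "click"), ("result_1", "click"),
                   ("download_link", "click")]

    def compare_sessions(expected, actual):
        # Align the participant's actions against the engineer's expected path.
        matcher = difflib.SequenceMatcher(a=expected, b=actual)
        for op, i1, i2, j1, j2 in matcher.get_opcodes():
            if op == "insert":
                print("Extra actions (possible confusion):", actual[j1:j2])
            elif op == "delete":
                print("Expected actions skipped:", expected[i1:i2])
            elif op == "replace":
                print("Deviation:", expected[i1:i2], "->", actual[j1:j2])
        print("Path similarity: {:.0%}".format(matcher.ratio()))

    compare_sessions(expected, participant)

A report like this shows exactly where a participant wandered off the expected path, which is the raw material for the improvement suggestions mentioned above.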




IV. Usability Review:

1. Using a simple form, collect the details of the usability issues.
2. From these details, a report page as well as a few charts (such as a Severity Graph, an Issues Graph and a Location Graph) can be generated; a small sketch of the tallying appears after this list.
3. Write a review document and report it to your manager.
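As a rough illustration of how the chart data could be tallied from the collected form entries, here is a short Python sketch. The issue records and field names are hypothetical:

    import collections

    # Hypothetical issue records collected from the review form.
    issues = [
        {"location": "Checkout", "severity": "High"},
        {"location": "Search results", "severity": "Low"},
        {"location": "Checkout", "severity": "High"},
        {"location": "Home page", "severity": "Medium"},
    ]

    severity_counts = collections.Counter(i["severity"] for i in issues)
    location_counts = collections.Counter(i["location"] for i in issues)

    # Crude text version of the Severity Graph.
    for severity, count in severity_counts.most_common():
        print("{:<8} {}".format(severity, "#" * count))

The same counts could feed a real charting component for the Severity and Location graphs.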

Monday, January 7, 2008

An interesting vision of Web 2.0's future

Recently I read an article about the "Future of Web 2.0". The author, Nat Torkington, describes the future of Web 2.0 up to 2022. But I think the lifespan of Web 2.0 won't be that long; some other new technology will beat it in the future. It all depends on the evolution of browsers and on usability needs.

The Future of Web 2.0
I've been in Singapore this week, giving presentations on Web 2.0 and helping the government's Infocomm Development Agency with their plans to foster startups in the country. I often get asked about the future of Web 2.0--is it a bubble, when will it be replaced by something new? Fortunately we've done a lot of work on this at the O'Reilly Radar lately, and I'm able to lay out a clear vision of the future for them. It goes something like this ...

2004: Web 2.0 coined, the movement named.

2006: "You" named TIME Magazine's "Person of the Year", a tribute to Web 2.0.

2007: You are here.

2008: Firefox 3.14159 ships (those geeks at Mozilla just won't be able to help themselves, and the resulting flamewar and developer resignations over whether to call it "PiFox" or not will lead to it being dubbed "PyreFox"). This version adds offline support to Ajax web applications. People will want to call the result "Web 3.0" but that term was claimed in advance by the Semantic Web so the blogosphere will quickly decide to call this Web 2.86 but the period will be quickly lost (to the condemnation of purists) and the media will refer to "Web 286".

2009: Semantic web researchers develop a deductive calculator that solves arbitrary problems using the math knowledge encoded in the web. It would be heavily adopted by school children to solve their homework but it will require the problems be expressed in TeX markup and the only papers to have been expressed in the format will be from an obscure Russian grad school that specializes in the geometric expression of information theory results in Riemann spaces. The imminent arrival of Web 3.0 will be predicted.

2009: The fascination with widgets leads Firefox 4 to integrate with the native operating system's desktop to offer a new cross-platform widget environment. Out of respect for the diligent workers still building the Semantic Web, it is agreed that we'll reserve "3.0" for their work. Bloggers skip that number and go straight to Web 3.1.

2010: Semantic Web developers release a new XML format. This will be hailed as the final step to the completion of Web 3.0.

2010: The growing proliferation of complex user interfaces built with Ajax, Flex, and anything else web developers can get their hands on will lead to growing calls for standardisation. The W3C will fail to be able to do this, but a consensus API and widget set from the leading Ajax toolkits will emerge and be implemented by Firefox 5 and then, several months later, IE 12. Because it's almost 100% of the way to Tim's vision of the Internet Operating system, he'll argue (and win) that it should be called Web 95.

2011: Semantic Web researchers will unveil a game that tricks kids into adding tuples to an RDF data store. Despite being hailed as the gateway to Web 3.0, all that will result is the world's most complete database of Pokemon characters.

2012: The clandestine Mozilla thin client system will launch--an entire bootable platform, built on Linux, that only exists to run the web server. The war over whether it should be GNOME or KDE will have been settled by producing two versions, which hampers adoption rates initially. Finally the two projects will be forced to merge or kill the best chance they have to overthrow Windows ("2011 is the year of the Linux desktop", headlines will pronounce) and the GNODE (as it will be known)-powered Firefox 6 will sweep the world. Headlines pronounce Web 98.

2013: A long delay will pass without much innovation, during which time Firefox 6 will achieve near-complete market penetration before the arrival of malware targeting it. The Mozilla team, caught between bug fixes and new features, will struggle to finish Firefox 7. Their solution will be to acquire Opera and release it on top of OpenSolaris as an "Enterprise-ready Firefox". The resulting fragmentation of the web (thought to be over after IE 13 was retired when Microsoft turned into a pure services company in 2012) results in chaos. Mozilla will promise but never deliver a web portability tool dubbed "No Trouble", and in its honour wags will dub this era "Web NT".

2013: Semantic Web researchers will unveil a new RDF database for Java 6 Enterprise Edition ("Raging Marmot"). Parties will be thrown in honour of the arrival of Web 3.0.

2015: Mozilla will EOL their Opera line and integrate the few remaining features they liked into Firefox. To win back the faith of their users, they'll invest heavily in designers and UI specialists. The resulting focus on user experience will cause this incarnation of the web to be known as "Web XP".

2020: After many years of development and malware fighting, Mozilla will drastically revise downward the feature set for Firefox 7. They'll skip version 7, and release "Firefox X". X will support RSS for blogs, IM, twitter, and the new communication system that flashes updates from your friends every 2 seconds in yellow on black 64pt type as you work. "Crack", as the system will be called, will be so addictive that it drives sales of Firefox X through the roof (the Mozilla Corporation will have burned through its cash reserves attempting to get Firefox 7 out the door and must consequently put a price tag on X). From the profits and resulting IPO Mozilla will launch the Mozilla Benevolence Fund, dedicated to solving disease, eliminating hunger, and spreading warm fuzzies in the third world. Within six months, the body count of Crack users found dead in the glow of their screens (unable to leave because of the addictive sense of connection provided by Crack) will turn public opinion and Wall Street against Mozilla but the genie is out of the bottle. As the corpses stack up in city streets, the professional time-wasting class known as Knowledge Workers will have been eliminated from the world. We'll return to a hunter-gatherer-like society in which the strong survive and the weak are feasted upon. As civilization crumbles, bards will proclaim we are able to see into a new world that's free of offices and cities, a world where mankind lives in rolling green fields and cloud-filled skies. In honour of this view of the world, the bards proclaim the age of Web Vista.

2022: The last Semantic Web researcher announces a Sudoku solver that operates on RDF-expressed puzzles. The failure of the last functioning laptop (a milspec Pentium from 2008) is all that prevents the arrival of Web 3.0.

Thursday, January 3, 2008

Competitive Analysis of Usability - A worthy article

Conducting a competitive analysis is an important part of the job if you're a usability engineer or information architect. A good competitive analysis not only produces usability metrics but also aids decision makers in their strategic goal-setting and planning. Done right, a good competitive analysis can steer a Web development project in the right direction.
The day will come when you're sitting happily at your desk and someone from marketing or business development will come into your office and ask you to do a competitive analysis for them. The company is launching a news site or portal, and the decision makers want to be sure that their site will stand up to the competition.
Suddenly, you're not just in the world of usability and information architecture -- of theories and deep thinking about cognitive psychology. You're now in the rubber-meets-the-road world of business. Although you'll be doing old-fashioned usability analysis work, you're also expected to guide the team toward increasing return on investment. You're expected to provide baseline readings from which to measure success. And you're expected to help the team snoop out what the competition is doing.
If all this sounds a little out of your league, don't worry, because it isn't. Let's start with the basics.
First things first

The first thing to realize is that a Web site competitive analysis is usually performed for a team of business specialists who know nothing about design, usability, or information architecture. They don't have a clue about labeling systems, search ergonomics, or affordance. All they want to know is what the competition is doing and how they can do it better. Obviously, your expertise is in usability and user experience design, so you'll be evaluating sites along the lines of your domain expertise, but the data you gather must always point toward making a smart business decision.
Your audience will also expect a presentation and a written report. The presentation can knock the tops off the mountains, but the report better have some detail in it. They expect your findings to be well organized, moving from executive summary to appendixes loaded with relevant details.
The end result of your analysis is a decision -- a business decision that affects the rollout of design and development. Your recommendations could be as "trivial" as adding search functionality and a look-and-feel upgrade to an already crowded deployment schedule, or it could have more far-reaching ramifications, such as adding to a budget for content acquisition or a shift in messaging. That's why your conclusions are so important. Arriving at them is not just an academic exercise.
Next we'll discuss who and what you'll be analyzing.


Who's the competition?

It's very likely that you'll be given a list of competitors. Every company that has a handle on their market space knows who the competition is. And just about every company has a list of companies on their "target list" -- that special subset of companies that they want to beat soundly in the marketplace.
Regardless, the list you get will likely be incomplete. That's because the people giving you the list will have their "business" hat on, not their "functionality" hat on. For example, if the company you're doing the analysis for is in the freight cargo business, you're likely to get a list of other sites or portals belonging to companies in the same business. However, it might be smart to add sites like travelocity.com, which specializes in consumer travel, because their site contains functionality that might be universal to all transportation applications (i.e., departure and destination points are common to freight trucks and airline customers).
Along with a list of competitors, you'll likely get a list of items that they want you to focus on, or at least, a list of items they want to do better than the competition. For example, the team might be fixated on the number of content items deployed on their own site. If Competitor X has 500 content items, they'll want to know how many content items Competitor Y and Competitor Z have. The subtext will be, "How fast can we have more content items?"
Resist any impulses to follow subtexts at this point. To follow our example, you might dig deeper and find out that those 500 content items deployed on Competitor X's site are outdated, badly written, and generally not useful to their audience.
If the company you're doing the analysis for doesn't know who the competition is, then you'll need to do some sleuthing. Find out the company's Standard Industrial Classification (SIC) code and then look up other companies in that same category. Try to find out what the company is striving to achieve with their own Web offering and match targets appropriately. Some relevant criteria for determining worthy adversaries are geographic location, total revenues, total profits, and strong branding.
If you're the one drawing up the list, always check with someone at the company who is in-the-know (usually someone in marketing or business development). This can save you lots of pain and heartache later, and could also save your credibility when you deliver the results of your work.

What to analyze

Now that you have a list of competitors, you need to draw up a list of items to analyze when you visit their sites. I've developed a categorized list of items over the years, which is included below:

Home page. How informative is the home page? Does it set the proper context for visitors? Is it just an annoying splash page with multimedia? How fast does it load?
Navigation. Is the global navigation consistent from page to page? Do major sections have local navigation? Is it consistent?
Site organization. Is the site organization intuitive and easy to understand?
Links and labels. Are labels on section headers and content groupings easy to understand? Are links easy to distinguish from each other? Or are they ambiguous and uninformative ("click here" or "white paper")? Are links spread out in documents, or gathered conveniently in sidebars or other groupings?
Search and search results. Is the search engine easy to use? Are there basic and advanced search functions? What about search results? Are they organized and easy to understand? Do they give relevance weightings or provide context? Do the search results remind you what you searched for?
Readability. Is the font easy to read? Are line lengths acceptable? Is the site easy to scan, with chunked information, or is it just solid blocks of text?
Performance. Overall, do pages load slowly or quickly? Are graphics and applications like search and multimedia presentations optimized for easy Web viewing?
Content. Is there sufficient depth and breadth of content offerings? Does the content seem to match the mission of the organization and the needs of the audience? Is the site developing its own content or syndicating other sources? Is there a good mix of in-depth material (detailed case studies, articles, and white papers) versus superficial content (press releases, marketing copy)?

I provide a rating for each question on each site visited: 1=bad, 2=poor, 3=fair, 4=good, 5=outstanding. Naturally, you may want to tweak this scale to fit your needs, but it's important to have some kind of scale to make the job of comparison easier. The list of resources contains links to other criteria you can use.
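For instance, one simple way to capture these ratings during a site visit is a small score table keyed by criterion. The Python structure below is just an illustration; the category labels come from the list above, everything else is assumed:

    # Rating scale: 1=bad, 2=poor, 3=fair, 4=good, 5=outstanding.
    LABELS = {1: "bad", 2: "poor", 3: "fair", 4: "good", 5: "outstanding"}

    # One visited site's scores, one per criteria category.
    site_scores = {
        "Home page": 4,
        "Navigation": 3,
        "Site organization": 4,
        "Links and labels": 2,
        "Search and search results": 5,
        "Readability": 3,
        "Performance": 4,
        "Content": 4,
    }

    overall = sum(site_scores.values()) / len(site_scores)
    print("Overall: {:.2f}".format(overall))
    for criterion, score in site_scores.items():
        print("{:<28} {} ({})".format(criterion, score, LABELS[score]))

Recording every site in the same structure makes the later comparison and number crunching mechanical.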

Conducting the analysis

Now that you have a list of sites to visit and a list of criteria to compare, start your analysis. Be sure to conduct your analysis with some rigor. Don't be haphazard, and don't do things differently with each site visit. Try to analyze a site without interruption. In other words, do everything you can to reduce bias in your investigation.

Here are some additional guidelines:
Visit one site at a time, and take the same (or at least, similar) paths through each site. Follow the checklist of criteria.

For each criterion, take lots of notes. You'll refer to these notes when you organize and write your report.

Try to give a score for each criterion as you complete them. That way you'll have scores for each major category as well as for each site.

If the company that you're doing the analysis for has an existing site, then remember to rate them last. After visiting the company's competitors, this will give you some sense of objectivity. This also provides a good measurement comparison for the readers of your report.
When you're ready, you'll need to do some number crunching. Although a discussion of statistical methods could easily fill several books (and has), there are, at minimum, a handful of important calculations to make for each site:
Mean: The mean is derived by adding all values in a set and dividing by the number of items in the set. For example, in a data set comprising scores of 3, 4, 4, 5, 3, 2, and 4, the mean would be 25 / 7, or about 3.57.
Median: The median is derived by lining up all values in a data set from smallest to largest and picking the one that's right in the middle. To continue our example, sorting the data set gives 2, 3, 3, 4, 4, 4, and 5, so the median is 4, the middle value (with an even number of values, take the mean of the two middle values). Some feel that the median is a better representation of an "average" score, but I think that using both the mean and the median gives you a better overall picture.
Mode: The mode is derived by calculating the highest frequency value in a data set. In our example data set, the mode would be 4 (there are more 4s than any other value).
Maximum, minimum, and spread: The maximum value in a data set is the largest value, and the minimum value is the smallest. The spread is the difference between these two values. To complete our example, the minimum value is 2, the maximum value is 5, and the spread is 3.
Together, these values (mean, median, mode, maximum value, minimum value, and spread) start to tell a story. They don't tell the whole story, but they certainly illustrate and make plain the results of your work. For example, Web sites that have means and medians that are far apart indicate more weight on extreme ends of the scale (either more 1s or 5s in the established rating system). Mode values that are significantly different from medians and/or means also indicate clumping of values away from the normal, expected curve. Web sites with large spreads between minimum and maximum values might indicate a high level of inconsistency in the different portions of the site; in other words, a site might have poor search functionality but excellent content organization and site navigation.
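As a quick check of these definitions, here is a small Python sketch (the language choice and the statistics module are mine, not the article's) computing all of them for the example rating set:

    import statistics

    scores = [3, 4, 4, 5, 3, 2, 4]  # one site's ratings on the 1-5 scale above

    print("mean:  ", round(statistics.mean(scores), 2))  # 25 / 7 -> 3.57
    print("median:", statistics.median(scores))          # middle sorted value -> 4
    print("mode:  ", statistics.mode(scores))            # most frequent value -> 4
    print("min:   ", min(scores))                        # -> 2
    print("max:   ", max(scores))                        # -> 5
    print("spread:", max(scores) - min(scores))          # 5 - 2 -> 3

Excel's AVERAGE, MEDIAN, and MODE functions, mentioned below, compute the same values.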

You must remember one thing: the numbers you assign to any part of a Web site are, as much as you'd hate to admit it, somewhat arbitrary. Although you may be an expert at usability or information architecture, any number of factors can cause bias to enter the process. You might be in a hurry, have a pressing deadline distracting you, or your mind may wander while you're finishing an evaluation. You might be evaluating a Web site belonging to a big competitor, and there may be some tacit pressure to downgrade any scores you give them.

Be as fair as you possibly can, and make it understood that the numbers you assign are subjective scores, not the results of ironclad science. They're assigned and used primarily to have something quantifiable to point to and discuss, instead of just guesses and raw opinion.
You can perform this task of crunching numbers manually or with a spreadsheet. Excel and other spreadsheet tools provide built-in functions for calculating means, medians, modes, and other statistical values.

Writing the report

Eventually, you'll need to take all your notes and all those numbers you've crunched and put them in a report. Most usability engineers and information architects I've met would rather do anything than write, but this is one case where what you write is as important as all the other work you've done.

Why? Because your report will be used by decision makers, and I don't mean as filler in their inbox, either. They'll read it, digest your findings and conclusions, and try to make decisions that affect company strategy -- or at least, Web site deployment strategy.
Writing a report isn't that difficult; in fact, it's about the easiest piece of writing that you'll ever undertake. Why? Because a report is very structured, and the structure can aid your writing. A good report shouldn't contain any surprising twists and turns. In fact, the readers of your report will be expecting something along these lines:

An executive summary, which contains a summary of your report. You'll probably write this section last. Subsections of the executive summary should include a section summarizing why you undertook the analysis, a summary of the sites' rankings, and a summary of recommendations for further action.
A methods section, in which you explain the methodology you employed for selecting and rating the sites, including what criteria you looked at. This section provides insight into your thinking when you undertook the analysis.
A findings section, in which you summarize your findings for each site. Start each subsection with the name of the site, the site's URL, and the overall score for the site. Then go through each part of the site and describe how it ranked, including a site section score. Do this for each site. The findings section will comprise the bulk of your report.
A discussion & recommendations section, in which you provide future direction for the team. This is the appropriate section to mention integrating other sites' best practices into the site being deployed by the company.
One or more appendixes, in which you provide detailed information. It's appropriate to list raw data of your findings here.
As for process, the best approach is to create a file in your favorite word processor and fill in all the headers that mark the sections. This sets up an informal outline that you can "fill in" as you go. My advice would be to write the methods section first, as you know what methodology you employed. Writing this section first will loosen you up and get the writing flowing.
Next, write the findings section. This section is the longest of the entire report and will take you at least a day, if not more, of solid work to complete. Once you've finished with the methods and findings sections, knock out your recommendations and then complete the executive summary.
Add the appendixes to the back, and let the report rest for at least a day. Then go through it again, from top to bottom, and clean up the verbiage. Remember that shorter is better. If you can say something in 10 words, find a way to say it in 7 or 8. Cut out as many adverbs and adjectives as possible. Remember that those reading your report will want to get to the heart of the matter and won't appreciate flowery language.
When you're happy with it, give the report to someone else and have them review it. Don't pick a pushover or someone who will return it with hardly any comments, either. Pick someone with a discriminating eye -- someone who will ask lots of questions and nitpick.
The more you cover in your report, the less stupid you'll feel when you give your presentation.

Giving the presentation

Giving a presentation strikes more fear into people's hearts than writing does. Usually, this fear stems from nervousness, not knowing the subject matter, or fear of boring the audience. However, the kind of presentation you'll be giving isn't any cause for concern, because all the obstacles have been removed for you:

You know the subject matter intimately.
Your audience is genuinely interested in what you have to say.
The subject matter is bound to captivate the audience.

When you give your presentation, avoid the impulse to talk to the slides. Instead, use the slides as visual confirmation of what you're saying. Speak with an easy, even tone, as though you were telling a group of friends something important.
I personally don't believe in using PowerPoint slides whenever I give a speech, but for this kind of presentation, you'll need a few well-chosen slides that highlight your findings and recommendations. My advice to you is to create 5-7 slides with bullet points and/or data tables for this purpose.
Start by introducing yourself and then launch into why you performed the competitive analysis. As with writing, providing this information first will loosen you up; after all, you know both of these topics very well.
Next, talk about your methodology, and get on to the findings as soon as you can. Don't give a blow-by-blow of each of your findings -- instead, summarize, and use visuals to punctuate your summaries. For example, instead of talking about each segment of each site, provide a summary of where each site succeeded and failed, and provide that information as a table on a slide.
Finally, follow with your recommendations, and then open up the floor to Q&A. With any luck, the process of analyzing competitors' sites, writing (and polishing) the report, and rehearsing your presentation will mean that you're well prepared for any and all questions. If you do get a question you don't know the answer to, don't squirm, equivocate, or sidestep. Tell the audience that you don't know the answer to that question and that you'll find out. Then follow up appropriately.
You can distribute the report as an email attachment or as hard copy at the meeting, or both. It's my opinion that handing out a hard copy report is good, as this gives the decision makers something tangible to hold. Don't give out copies of the report until the end of the presentation; otherwise, you risk having your audience looking at the report instead of listening to you.

Summary
Conducting a competitive analysis is an important part of your job as a usability engineer or information architect. A good analysis and subsequent report can provide the necessary information to influence a decision regarding Web site deployment. Done right, a competitive analysis can steer a team in the right direction, as well as lend credibility to your career and position in the company.

Source

Tuesday, January 1, 2008

50 Best Websites 2007 - by Time.com

Recently Time.com released its list of the 50 Best Websites of 2007, and they listed the Top 25 websites under the "We can't live without it" category. Google took first place among them, and it deserves the honor. But they still need to improve their services, especially Gmail.

OK. Visit here.

Happy New Year 2008


Wishing all usability professionals and students a happy new year 2008.