It has often been assumed that today's young people belong to a generation with especially high ICT skills, owing to intensive use of digital technology from childhood onwards. According to this line of thought, the younger generation can be seen as "Digital Natives", natives of the digital world, in contrast to their elders, who are "Digital Immigrants". But is this really the case?

In this article, Tore Ståhl examines first-year students in the bachelor programmes at a Swedish-language university in Finland in the autumn of 2011 and 2012. Ståhl sorts the respondents into five different ICT clusters based on their "use practices", and then compares the levels of ICT competence across these groups. The data show that young people are a highly heterogeneous group when it comes to ICT competence, and the differences appear both within the user clusters and between them.

It therefore makes little sense to speak of a generational divide between a younger generation of "Digital Natives" and an older generation of "Digital Immigrants". The divide instead runs between those who have the opportunity to develop their skills to a level where they benefit from and enjoy ICT, for example in higher education, and those who do not have this opportunity. This divide cuts across generations, and it must be acknowledged so that something can be done about it.

How ICT savvy are Digital Natives actually?

The ICT skills among the young generation are diverse, limited and do not necessarily match the requirements of higher education studies, increasing the risk of a digital divide. Not all users manage to develop their skills to a level that allows them to fully utilize the advantages of ICT, for example in their studies. Acknowledging these divides is a necessary step towards taking measures to bridge them.

The purpose of this article is to explore how habits of using Information and Communications Technologies (hereafter ICT) and actual ICT skills relate to what has been called ‘Digital Natives’.

The present study explores Digital Native-like people and other groups among two cohorts of students in their first year of university, contributing to the overall picture of Digital Natives as part of the young generation. The study combines survey data describing ICT and media use with test data describing performance-based ICT skills.

The Digital Natives debate

During the first decade of this millennium, the growing generation was the focus of an extensive debate in terms of a so-called Net Generation (Tapscott, 1998), Millennials (Howe & Strauss, 2000) and Digital Natives (Prensky, 2001a; Prensky, 2001b). Jones et al. (2010) provide a comprehensive overview of the terms used.

The common denominator for many advocates of a digital generation was that they attributed different characteristics to the members of the young generation, maintained that these characteristics were a direct outcome of technology use, and generalized them to apply to the whole age cohort.

Almost concurrently with this debate, George Siemens (2005) presented his learning theory for the digital age, suggesting that learning will be different. To some extent, the ways of acting and learning suggested by Siemens resemble the characteristics attributed to the Digital Natives.

For several years, the public and the academic rhetoric accepted the thought of a whole generation being homogeneous regarding both ICT skills and ways of using and relating to ICT. Eventually, critical voices (e.g. Best & Kellner, 2003; Lee, 2005) appeared, challenging the over-generalizing rhetoric and suggesting that the Net Generation may be even more heterogeneous than any previous generation.

Still, Best and Kellner (2003) pointed out that this generation is indeed the first one to grow up surrounded by the internet, multimedia and new media. It might be added that the Net Generation also lacks a personal experience of the time before the internet, search engines and mobile phones, not to mention smartphones.

In the Digital Natives rhetoric, the simplified picture of homogeneous generations has been used as an overriding explanatory factor. However, drawing upon Mannheim (in Buckingham, 2006, p. 2), the Digital Natives will initially have had ‘similar life chances’, but by the time they enter higher education they will have had different experiences and they will have made different things out of their life chances. Thus, a heterogeneity seems inevitable.

Digital Natives characteristics

The intensive debate during the last decade did not produce a clear definition for Digital Natives. Different characteristics were suggested, and these will serve to describe the Digital Natives concept.

Prensky (2001a; 2001b) maintained that Digital Natives are used to receiving information fast, they like to parallel process and multi-task, prefer random access rather than structured information, function best when networked and prefer games to “serious” work.

Digital Natives were described as ‘just-in-time learners’, knowing where to find information once they need it. Their process of thinking relies on social network navigation (Anderson & Balsamo, 2008, p. 244). They are committed to a culture of sharing, for example pictures, status updates and likes (cf. Horrigan, 2007; Kennedy, Judd, Dalgarno, & Waycott, 2010). They are ICT savvy, and they are heavy users of a multitude of technical devices (e.g. Tapscott, 1998, pp. 40, 99; Prensky, 2001a; Horrigan, 2007).

The characteristics were about use preferences and habits, but also about ICT skills, connected by the assumption that heavy use of devices and ICT skills nourish each other. Throughout the debate, the characteristics were presented in a generalizing manner, suggesting that all members of the young generation are ICT savvy and constantly connected, but the question needs to be asked whether they are.

Digital Natives, generations and ICT

Several research projects have explored and questioned the existence of a homogeneous Net Generation with a general net savviness, and the results more or less put an end to the oversimplification and generalization (see Helsper & Eynon, 2010; Jones & Hosein, 2010; Jones et al., 2010; Lai & Hong, 2015; Litt, 2013; Thompson, 2013 for informative overviews).

Briefly, the main findings of the aforementioned studies, and of the studies they refer to, are that the Net Generation is not homogeneous: not all young people report using ICT broadly or feel that they master ICT well. The gap left by previous research concerns the performance-based ICT skills (as opposed to self-reported skills) within the generation, the extent to which Digital Native-like groups can be identified, and how ICT skills are distributed within and across different groups.

The following sections briefly summarize some studies that are of special interest for the present study.

ICT use patterns

Different groups describing the heterogeneity within the young generation have been identified by surveying use habits, for example Kennedy et al. (2010) and Jones and Hosein (2010). Van den Beemt, Akkerman, and Simons (2011) surveyed actual use and opinions among 2,138 Dutch users, and presented a typology based on use patterns.

North, Snyder and Bulfin (2008), building on Bourdieu’s concepts of ‘habitus’ and ‘taste’, argue that the digital taste of young people is influenced by markers of class, which is something more than merely socio-economic status. Robinson (2009) observed that good, high-autonomy access to ICT resources induced a more playful and exploratory stance towards online information seeking among respondents, an approach that Robinson labels ‘playing seriously’.

Helsper and Eynon (2010) concluded that it is not meaningful to define natives and immigrants as a dichotomy, but rather as characteristics on a continuum, and, most importantly, that being an immigrant is not a final state.

ICT access, skills and a digital divide

Previous studies agree that, on average, young individuals use ICT intensively, but skills are predominantly measured using self-report instruments. Kvavik and Caruso (2005) reported that leisure time skills did not translate into the kind of digital literacy required in higher education, and in general, the results from several studies refute the assumption that the whole generation would be very skilled in ICT (e.g. Kennedy, Judd, Churchward, Gray, & Krause, 2008; Kennedy et al., 2010; van Deursen & van Dijk, 2009; Helsper & Eynon, 2010; Bullen, Morgan, & Qayyum, 2011; van den Beemt et al., 2011; Kirschner & van Merriënboer, 2013).

Van Dijk (2008, p. 290) presents a recursive and cumulative model of access to digital technologies containing four types of access, marking the necessary steps to make use of digital technology.

Motivation to use a technology of some kind is the first step, with some resemblance to the digital habitus and taste described by North, Snyder, and Bulfin (2008). The next three steps express that, provided sufficient (2) material, physical and temporal access to ICT resources, the individual will be able to (3) develop her skills, which in turn will (4) empower her to use ICT resources for personal objectives.

Lack of material access expresses the so-called primary level digital divide (cf. Büchi, Just, & Latzer, 2016). Lack of skills and lack of usage are distinguished as secondary and tertiary levels of the digital divide. Neither access nor divide is to be regarded as dichotomous, but rather as operating on continua.

Skill differences have been discussed in terms of a digital divide (Buckingham, 2006, p. 9; van Dijk, 2008, p. 290). Büchi, Just and Latzer (2016) present a five-country study regarding differences in Internet use and an overview of studies confirming the persisting digital divide both between and within countries. Their own study, surveying five high-penetration English-speaking countries, showed that the digital divide has shifted from lack of access (first-level) to lack of use, that is, second or third-level digital divides.

Descriptions of performance-based ICT skills are scarce and have been called for (cf. Litt, 2013; Huggins, Ritzhaupt, & Dawson, 2014). Van Deursen and van Dijk (2009) measured what they call Operational, Formal, Information and Strategic skills, using performance-based tests. The so-called Net Generation scored relatively high in operational and formal tasks, but not significantly better in information and strategic skills compared to older participants.

Van Deursen and van Dijk (2009), and van Deursen et al. (2012), supplemented performance-based tests with observations, and conclude that observation can improve reliability but is too time-consuming to be used in large-scale settings (such as testing cohorts of university first years).

Gui and Argentin (2011) measured theoretical, operational and evaluation skills among Italian teenagers, and report good performance in operational skills, but poor performance in evaluation skills, although with some doubt regarding reliability.

Aesaert and van Braak (2015) report performance-based testing among sixth-graders using a walled (closed) test environment, which highlights a specific challenge: reliability of the tests can be improved by creating standardized, simulation-based tasks in a closed environment, but at the cost of authenticity. Creating similar tasks in an open environment preserves authenticity, but reduces reliability due to the constantly changing ICT environment, which in turn requires continuous effort to keep the tasks up to date.

Research questions

Out of the studies cited in the previous sections, Horrigan (2007), Jones and Hosein (2010), and van den Beemt, Akkerman, and Simons (2011) identified groups based on use patterns, but did not measure ICT skills. Then again, van Deursen and van Dijk (2009) and Aesaert and van Braak (2015) measured performance-based skills, but not in relation to use patterns. There is an apparent research gap regarding performance-based (as opposed to self-reported) ICT skills and how skills relate to Digital Nativeness. Ultimately, this information will contribute to clarifying questions around digital divides.

Assuming the young generation is heterogeneous and considering the call for descriptions of the generational heterogeneity (cf. Kennedy et al., 2010; Litt, 2013; van den Beemt et al., 2011), the present research will explore what this heterogeneity looks like in terms of ICT use patterns and performance-based ICT skills. The research is guided by the following research questions:

1. What groups can be identified based on the users’ ICT and media practices?

2. What are the actual ICT skills among the young generation?

3. To what extent can members of the young generation be regarded as Digital Natives or Native-like?

It needs to be stated that an elaboration of the topic of the digital divide is beyond the limits of this study, and the same applies to the vast discussion regarding digital literacies. Instead, this study focuses on the distribution of performance-based ICT skills at the levels of operational, formal, information and strategic skills (cf. van Deursen & van Dijk, 2009).

Method

In order not to conflate skills and use practices, the present study set out to first identify groups based on use practice variables not connected to skills, and thereafter to explore performance-based ICT skills across these groups.

Participants and data sources

Data collection aimed at taking a snapshot of the students just entering the university, with the ICT skills they brought with them. Research data was collected during the introductory week among all first-year students entering some of the fourteen bachelor degree programmes (Table 1) at Arcada University of Applied Sciences in Finland in the autumn of 2011 and 2012.

The university’s working language is Swedish, and it recruits students mainly from the Swedish-speaking minority population, but also attracts international students. This presentation draws upon data from a survey and the ICT Driving Licence (hereafter ICTDL) level tests.

ICT, media and me

The objective of this survey was to collect data about the students’ background regarding ICT and media use. The survey was based on the Australian SETQ questionnaire (Kennedy et al., 2008; Gray et al., 2009). The SETQ was modified to correspond to the local context and contemporary ICT (e.g. 3G mobile connectivity), and also extended, such that the survey included items describing background, use frequency, and perceived skills regarding common software, use habits, and purposes for using ICT resources, gadgets and digital news media.

The survey was administered online, with items grouped around the aforementioned topics and spread over 36 pages. Use frequencies and skills were registered on an 8-point scale, ranging from ‘Never used/poor’ (1) through ‘Once-twice a year’ (2) up to ‘Several times a day/excellent’ (8) (Figure 1).

Figure 1. Sample screenshot illustrating a questionnaire page containing items regarding use frequency and perceived skill level.

The ICT Driving Licence

The ICTDL was developed at the University of Helsinki and had been used across all its faculties since 2006, and at Arcada University of Applied Sciences since 2008. The ICTDL was a compulsory part of the Introduction to University Studies course, and the level tests were used for low-stakes assessment of performance-based, basic ICT skills.

Based on level test scores, students chose an appropriate study path, that is, tuition or self-studies. The course was completed with an ICTDL examination test (grading passed/failed). The ICTDL level tests, study material and examination tests were published on the university’s online learning environment. As opposed to Aesaert and van Braak (2015), all tests were performed in authentic online environments.

The level test modules cover basic ICT topics (cf. the ST2L, Hohlfeld, Ritzhaupt, & Barron, 2010):

1. Basic use of computers, for example files, software and hardware, but also internet and e-mail.

2. The ICT services at the university (excluded from analyses).

3. Modifying and presenting data, that is, basic office tools.

4. Information seeking, library catalogues and reference databases.

5. Information security and privacy protection.

Level test scores for modules 1, 3, 4 and 5 were used for analyses. Module 2 scores were omitted, since they do not reflect ICT skills expected prior to entering university. Since the constantly expanding web and communication topics were included in module 1, it was more comprehensive than the other modules.

Van Deursen and van Dijk (2009) note that ICT skills tests seldom go beyond ‘button knowledge’ and operational skills, but on this point, the ICTDL had some strengths. Each of the five level tests contained four 1-point questions, measuring mainly operational and formal skills. Further, the tests contained two 3-point skill tasks, requiring both technical skills and higher-order competences (cf. Aesaert & van Braak, 2015).

The ICTDL was innovative in most of the dimensions suggested by Parshall et al. (2002, cited in Hohlfeld et al., 2010; cf. Gui & Argentin, 2011). The time-limited tests utilized extensive randomizing functions (items, attachments, order).

In order to enable automatic scoring and assessment of large student volumes, multiple-choice (MCQ) and matching items were used as response formats. Below are two sample items (somewhat shortened), illustrating module 1:

* 1p: You want to listen to a recorded lecture. To which port (see image) should you attach your headphones? [MCQ, image displaying a variety of plugs].

* 3p: Save the attached zip-file, containing files and folders, in your home directory. Sort all document files into the folder ‘Documents’, and all image files into the folder ‘Pictures’. How much space do the picture folder files require? [MCQ, 11 options covering both kB and MB values].

Data collection and research data

Data collection was organized in connection to the compulsory ICTDL Level Test sessions, scheduled for all new students during the first week of the semester (cf. Kennedy et al., 2008; Lai & Hong, 2015). For the purpose of informed consent, the students were introduced to the objectives of both survey and tests and informed (orally and in writing) that, although level tests were compulsory, the survey was voluntary.

The students were introduced to the questionnaire and informed that support was provided if needed. Those who chose to participate first completed the survey ‘Me, ICT, and media’, and then the ICTDL level tests, so that the results of the level tests did not influence the students’ self-assessment of their ICT skills (cf. van Deursen & van Dijk, 2009). Both the survey and the tests were administered online, and set up so that responses were stored as the respondent proceeded through the survey/test.

The questionnaires were distributed by individual e-mails containing a unique link to each respondent’s questionnaire. Among the two cohorts, 916 students completed the survey and/or the test. After data collection, the data sets were merged and anonymized.

Table 1. Total sample and present subsample.

Science categories / Degree programmes    Total sample: N, female %, portion    Present study subsample: N, female %, portion
Soft-applied science base 267 85.4 % 29.1 % 190 91.6 % 26.6 %
Nursing (dom+int)*) 150 84.0 %   95 91.6 %  
Occupational Therapy (dom) 35 94.3 %   33 97.0 %  
Social Services (dom) 82 84.1 %   62 88.7 %  
Mixed science base 422 61.4 % 46.1 % 343 62.4 % 48.0 %
Business Administration (dom+int) 217 54.8 %   152 54.6%  
Emergency Care (dom) 37 59.5 %   36 61.1 %  
Physiotherapy (dom) 53 67.9 %   49 69.4 %  
Sports and Health Promotion (dom) 55 63.6 %   52 61.5 %  
Tourism (dom) 60 78.3 %   54 79.6 %  
Hard-applied science base 227 24.2 % 24.8 % 182 23.6 % 25.5 %
Distributed Energy Systems (dom) 58 12.1 %   53 11.3 %  
Film and Television (dom) 66 42.4 %   57 42.1 %  
Information & Media Techn. (dom) 60 10.0 %   48 10.4 %  
Plastics Technology (dom+int) 43 32.6 %   24 33.3 %  
Total 916 59.2 % 100.0 % 715 60.3 % 100.0 %

*) dom = domestic students. int = international students

The international students (14%) were deemed too few and too diverse (32 nationalities) to be used in comparisons, and were therefore omitted. The average age among domestic students was 22 years, with 16 cases born before 1980, skewing the age distribution. These cases were likewise deemed too few and too diverse (professionally active, with families, i.e. non-typical students) to serve the analysis, and were therefore omitted.

Thus, the analyses were performed on a rather culturally and ethnically uniform subsample of domestic students born after 1979 who had completed both the ‘Me, ICT and media’ survey and the level tests (n=715, Table 1). Since it reduces the number of confounding variables, the sample uniformity may be regarded as an advantage.

The resulting subsample was slightly female-dominated, especially within the so-called soft-applied sciences. Within most degree programmes, the gender distribution deviated from the sample total. Computer, smartphone and internet coverage was close to 100%, and the medians for computer, mobile phone and internet exposure varied between 10 and 12 years. For the survey and test items used in the present study, the completion rate was 97.8–100%.

Degree Programme categorization modified from Becher (1994)

Analysis methods

Data analysis proceeds in three steps: 1) user clusters are identified based on ICT use patterns, 2) the ICTDL level tests are subjected to a descriptive analysis, and 3) the results from the previous steps are combined to analyse performance-based ICT skills within and across clusters, in order to assess which clusters qualify as Digital Natives based on both ICT use and ICT skills. For statistical tests, 0.05 was used as the threshold for significance.

The survey ‘Me, ICT, and media’ included 55 items describing use frequency and (self-reported) skills regarding computers (10), web activities (26), mobile phone activities (11) and news media (8) – see Figure 1 for a sample page of the survey. In previous studies, Helsper and Eynon (2010), Jones and Hosein (2010), van den Beemt et al. (2011), and Thompson (2013) used exploratory factor analysis to generate subscales.

In the present study, however, the aim was to create use pattern subscales so that they serve cluster analysis by expressing each use pattern as distinctly as possible. Therefore, the choice was made not to compute the subscales as factor scores, since that would cause cross-loading items to reflect on two patterns (cf. ‘Patterns of technology-based activities’).

Instead, as demonstrated by Kennedy et al. (2010), Thompson (2013) and Büchi et al. (2016), pattern subscales were created by combining conceptually connected items. The subscale scores were then computed as unweighted averages of item values, but only when a required number (x) of valid values were available for each case (see MEAN.x, SPSS, 2016). ‘Never used’ responses were treated as valid values since they supply relevant information for forming clusters (Table 2).
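As an illustration of this scoring rule, the sketch below reproduces the MEAN.x logic for one subscale in Python/pandas (the original computations were carried out in SPSS); the item and variable names are hypothetical, not the actual survey variables.

```python
import pandas as pd

# Hypothetical item columns for the 'Versatile phone use' subscale
# (the real SPSS variable names are not published here).
phone_items = ["phone_web", "phone_email", "phone_photo", "phone_mp3",
               "phone_games", "phone_organizer", "phone_videocall"]

def subscale_mean(df, items, min_valid):
    """Unweighted mean of the item columns, mimicking SPSS MEAN.x:
    the score is computed only when at least `min_valid` items are
    non-missing; otherwise the case gets NaN. 'Never used' (coded 1)
    counts as a valid value."""
    valid_counts = df[items].notna().sum(axis=1)
    means = df[items].mean(axis=1, skipna=True)
    return means.where(valid_counts >= min_valid)

# Example: 'Versatile phone use' required 6 valid values (Table 2).
# survey = pd.read_csv("survey.csv")   # hypothetical data file
# survey["versatile_phone_use"] = subscale_mean(survey, phone_items, min_valid=6)
```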

Clustering is about grouping cases by similarity, that is, minimizing within-group variance and maximizing across-group variance (Bailey, 2005, pp. 889–890). The Two-step Cluster Analysis method available in the statistics package is designed to reveal natural groupings within large data sets (SPSS, 2016). Thus, Two-step Cluster Analysis was used to create the clusters using the use pattern subscale scores as input variables.
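The TwoStep procedure itself is specific to SPSS; purely as an illustrative analogue (an assumption, not the procedure actually used), the sketch below standardizes the five subscale scores and lets an information criterion suggest the number of clusters, mirroring the idea of revealing natural groupings automatically.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.mixture import GaussianMixture

# X: cases x 5 use pattern subscale scores; random placeholder data here,
# not the actual survey data.
rng = np.random.default_rng(0)
X = rng.normal(size=(687, 5))

X_std = StandardScaler().fit_transform(X)

# Like SPSS TwoStep, choose the number of clusters by an information
# criterion (here BIC over Gaussian mixture models with k = 2..8).
models = {k: GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X_std)
          for k in range(2, 9)}
best_k = min(models, key=lambda k: models[k].bic(X_std))
labels = models[best_k].predict(X_std)
print(best_k, np.bincount(labels))   # suggested k and cluster sizes
```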

The level test scores were analysed with overall descriptives, and one-way and Welch ANOVA tests were used to assess whether the score means differed significantly across clusters.
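A minimal sketch of this comparison, assuming Python with SciPy and statsmodels instead of SPSS (and illustrative score arrays rather than the actual data), could look as follows.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.oneway import anova_oneway

# One array of module scores per cluster; illustrative values only.
rng = np.random.default_rng(1)
scores_by_cluster = [rng.normal(loc=m, scale=2.5, size=n)
                     for m, n in [(5.5, 136), (6.5, 161), (8.0, 120),
                                  (7.0, 159), (8.2, 111)]]

# Classic one-way ANOVA (assumes equal variances across clusters).
f_classic, p_classic = stats.f_oneway(*scores_by_cluster)

# Welch ANOVA (robust to unequal variances and unequal group sizes).
welch = anova_oneway(scores_by_cluster, use_var="unequal", welch_correction=True)

print(f"one-way ANOVA: F={f_classic:.2f}, p={p_classic:.4f}")
print(f"Welch ANOVA:   F={welch.statistic:.2f}, p={welch.pvalue:.4f}")
```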

Results

Use patterns and user groups

Patterns of technology-based activities

Table 2. Subscales based on frequency items (cf. Figure 1). The number after the subscale label indicates the number of valid values (x) required in the MEAN.x function syntax.

Subscale (required valid values) / Items    Cases %    Cronbach's alpha    Predictive value in cluster analysis    Item-total corr.
Versatile phone use (6) 98.2 .829 1.0  
use a mobile phone to browse the web       .678
use a mobile phone to send and receive email       .712
use a mobile phone to take digital photos or movies       .619
use a mobile phone as an MP3 player       .657
use a mobile phone to play games       .486
use a mobile phone as a personal organizer       .520
use a mobile phone for video calls       .440
Game playing (3) 97.1 .788 0.82  
play games on computer       .682
use web/LAN to play networked games       .694
play games on games console       .526
Sharing pictures and files (3) 98.2 .520 0.65  
use a mobile phone to send pictures or movies to other people       .339
use the web to share photographs       .366
use the web to upload and share MP3       .299
Digital news media use (3) 98.2 .648 0.62  
I follow the news using RSS feeds       .382
I follow the news on some newspapers' web sites       .449
I follow the news on some TV channels' web sites       .442
I use an app on my mobile phone to follow the news       .465
Blogging (3) 99.3 .760 0.47  
use the web to read other people's blogs or vlogs       .580
use the web to comment on blogs or vlogs       .668
use the web to keep my own blog or vlog       .554

Four of the use patterns were composed following examples in previous studies (pp. 91, 96), whereas ‘Versatile phone use’ was included in order to reflect smartphones becoming the new standard. The subscales showed good or satisfactory internal consistency (Table 2), and were further tested using Principal Component Analysis with Varimax rotation, where 15 out of 20 items single-loaded on the anticipated factor (KMO=.866, Bartlett's Chi-Square=4746, df=190, p<.001, 60.2% of variance explained).

±.32 used as threshold for loading (Finch, Immekus, & French, 2016, p. 143).

The five items cross-loading were conceptually logical, for example, ‘I use an app on my mobile phone to follow the news’ loaded on both ‘Versatile phone use’ and on ‘Digital news media use’.
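For readers who wish to reproduce the internal consistency figures in Table 2, the following sketch shows Cronbach's alpha computed from item data (Python/pandas; the item column names are hypothetical).

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha over a set of item columns (listwise complete cases):
    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    data = items.dropna()
    k = data.shape[1]
    item_variances = data.var(axis=0, ddof=1)
    total_variance = data.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical usage with the three 'Game playing' items:
# alpha = cronbach_alpha(survey[["games_computer", "games_networked", "games_console"]])
```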

User clusters

Clusters were generated using the use pattern subscales as input factors in the Two-step Cluster Analysis method, which is capable of automatically selecting the number of clusters. In this case, a model containing five clusters was chosen, since it contained rather equally sized clusters that differed clearly from each other regarding use patterns (Table 3, Figures 2–3).

Table 3. Five-cluster solution, distribution ratio 1.45.

Cluster N % Mean age Gender f/m %
Low-end users 136 19.8 % 21.2 76.5 / 23.5
Bloggers 161 23.4 % 20.8 87.6 / 12.4
Gamers 120 17.5 % 20.7 22.5 / 77.5
Communication-oriented users 159 23.1 % 21.3 56.6 / 43.4
High-end users 111 16.2 % 20.9 46.8 / 53.2
Total 687   21.0 60.3 / 39.7

Figure 2 illustrates which users engage in various activity areas: at least 80% of High-end users engage in all activity areas, whereas less than half of Low-end users engage in any activity area at all. Sharing pictures seems uninteresting for Low-end users, whereas all Gamers engage in some type of games. In general, Gaming is the most popular activity area, even among Low-end users. The largest differences appear within Sharing and Blogging.

Figure 2. Technology-based activity engagement across clusters. The bars express the portion of users that engage in some activity within the activity area, i.e. a response greater than 1 (‘Never used’).

Figure 3 provides a more detailed picture: the stacked bars represent frequency scores computed without the MEAN.x condition, but excluding ‘Never used’. These frequency scores correspond to the questionnaire scale (Figure 1), and show, for example, that High-end users have an average activity level between ‘Once a week’ and ‘Several times a week’, whereas Low-end users lie between ‘Every few months’ and ‘Once a week’. Just as all Gamers engage in some type of games, they also do so more frequently than any other cluster.

Figure 3. Use frequency means across clusters. The stacked bars correspond to the questionnaire scale (Figure 1; 2=‘Once-twice a year’; 8=‘Several times a day’).

The overall activity level is highest for Versatile phone use and Digital news media use (cf. nearly 100% smartphone coverage). The largest inter-cluster differences appear within Versatile phone use, Digital news media use and Game playing.

ICT skills

The overall descriptives of the test scores indicate that ICT skills are widely distributed, ranging over the whole scale from 0 to 10 in all modules. The ICT skills also appear heterogeneous across modules: the students scored reasonably well in basic computer and internet use (module 1), with 63.1% demonstrating good skills, whereas for basic office tools (module 3), both the mean score and the portion demonstrating good skills were lower (Table 4).

Table 4. ICT level test scores, overall descriptives.

Descriptives    1. Computers & internet    3. Office tools    4. Information retrieval    5. Information security
N 715 713 710 710
Mean 7.07 5.47 5.81 6.28
Median 8.00 5.67 6.00 6.50
Std. Deviation 2.84 2.90 2.15 2.15
Skewness -0.805 -0.192 -0.522 -0.409
Kurtosis -0.516 -1.084 -0.135 -0.304
Score distribution, %
poor skills < 4 a) 16.9 30.2 17.9 14.2
medium skills 4-7 a) 20.0 30.6 46.2 42.1
good skills > 7 a) 63.1 39.3 35.9 43.7

a) Cut-offs according to ICTDL specification, resembling the Finnish school grades where 4 is the cut-off for passed. Students demonstrating poor skills were recommended tuition.
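As a small worked example of these cut-offs, the sketch below (Python/pandas, with made-up scores rather than the actual data) computes descriptives of the kind shown in Table 4 and classifies scores into the poor/medium/good bands.

```python
import numpy as np
import pandas as pd

def skill_band(scores: pd.Series) -> pd.Series:
    """Classify ICTDL module scores using the Table 4 cut-offs:
    poor < 4, medium 4-7, good > 7."""
    conditions = [scores < 4, scores <= 7]
    return pd.Series(np.select(conditions, ["poor", "medium"], default="good"),
                     index=scores.index)

# Illustrative module scores only, not the real data.
module1 = pd.Series([2.0, 4.5, 7.0, 8.0, 10.0])
print(module1.agg(["mean", "median", "std", "skew", "kurt"]))
print(skill_band(module1).value_counts(normalize=True))
```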

ICT skills across clusters

ICT skills are part of the characteristics attributed to Digital Natives (p. 90), which calls for comparing ICT skills across clusters. A rather heterogeneous picture emerged across both clusters and tests. The score distributions suggest that the modules differ in their ability to distinguish the clusters, possibly due to different requirement levels (cf. p. 94).

In modules 1 and 3, the scores are both widely (SD 2.84 and 2.90, Table 4) and differently (Figure 4) distributed, whereas in modules 4 and 5, the scores are less widely distributed (SD 2.15) and also show smaller inter-cluster differences.

Within clusters, the level test scores ranged over the whole scale from 0 to 10 in all clusters except among Gamers (Figure 4). The largest inter-cluster differences can be observed in module 1, where Gamers appear as the most homogeneous group (SD 1.93) as opposed to the Low-end users (SD 3.12).

A comparison across clusters using one-way and Welch ANOVA tests indicated significant differences in all modules between several, but not all, clusters (module 1: F(4, 333)=20.27, p<.001; module 3: F(4, 681)=12.18, p<.001; module 4: F(4, 678)=6.81, p<.001; module 5: F(4, 330)=23.81, p<.001).

Gamers and High-end users appeared as top clusters, whereas Bloggers and Communication-oriented users appeared as middle clusters, and Low-end users as the bottom cluster with consistently lowest scores (Figure 4, tables available from author).

Figure 4. Level test scores across clusters.

An effect size analysis (Ellis, 2009) between the groups showed that the effect size between most adjacent groups (as ordered in Figure 4) was small (0.2<Cohen’s d<0.5) but between other groups medium (0.5<Cohen’s d<0.8). Large effect sizes (Cohen’s d>0.8) occurred in module 1 between Gamers and Low-end users, and in module 5 between Gamers and Low-end users, and High-end users and Low-end users (effect size tables available upon request).
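The effect size measure used here is Cohen's d with a pooled standard deviation; a minimal sketch (with illustrative, not actual, score arrays) is given below.

```python
import numpy as np

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d with a pooled standard deviation:
    d = (mean(a) - mean(b)) / s_pooled."""
    n_a, n_b = len(a), len(b)
    var_a, var_b = a.var(ddof=1), b.var(ddof=1)
    s_pooled = np.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (a.mean() - b.mean()) / s_pooled

# Illustrative values: module 1 scores for Gamers vs Low-end users.
rng = np.random.default_rng(2)
gamers = rng.normal(8.2, 1.9, 120)
low_end = rng.normal(5.5, 3.1, 136)
print(round(cohens_d(gamers, low_end), 2))   # falls in the 'large' (d > 0.8) range
```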

To conclude, within this sample, ICT skills are heterogeneously distributed both within and across clusters.

Discussion

Methodological limitations

Prior to discussing the results, some comments regarding the instruments and methods are in order.

The ICTDL had been in broad use since 2006 but was unfortunately never validated. However, the test items and topics, based on learning outcomes defined in the curriculum, were carefully considered, continuously evaluated and improved by an expert team. The 3-point items, requiring both knowledge and skills, were built upon a problem-solving process that would produce only one correct answer. For both 1- and 3-point items, responses were entered in an unambiguous format.

The differences between modules 1 and 3 versus 4 and 5 (Table 4, Figure 4) illustrate the challenge of creating tests that measure higher level skills (cf. van Deursen & van Dijk, 2009; van Deursen et al., 2012). Indeed, validating ICT skills tests would require standardization (cf. Aesaert & van Braak, 2015) which, in turn, would be contradictory considering the constantly developing ICT environment (versions, logic).

Hohlfeld et al. (2010) suggest that skills indicators should be ‘appropriate expectations of technology-related knowledge’ [for the intended user group], but with rapidly changing technology, ‘appropriate expectations’ must also change constantly. Tools for measuring ICT skills must be periodically updated (Huggins et al., 2014), and thus, after each (annual) update, a tool would need to be validated anew, which was not possible for the ICTDL.

Still, statistics from the preceding years, in which the tests had been updated annually, show mean scores close to those reported in Table 4 and a similar distribution across skill levels (tables available upon request), both suggesting stable measurement. That is, each year, each cohort’s ICT skills were at about the same level in relation to the current (updated) state of the art.

The SETQ (Gray et al., 2009) was jointly produced and refined by educational experts at three major universities, but unfortunately never validated. Updating the survey to conform to the local culture and the contemporary ICT and media environment ensured context fit. Most items showed a high response rate, indicating that the respondents understood the questions, possessed the information required to respond and, as far as can be assumed in any self-report survey, answered truthfully.

The items included in SETQ were never designed with subscales in mind. Thus, it is not relevant to consider the reliability of the SETQ, but rather the reliability of the subscales constructed in the present study. The subscales showed good internal consistency values and where it was conceptually expected, the items correlated moderately, indicating that they still measured different aspects.

Technology use patterns and clusters

Previous research (cf. Büchi et al., 2016; Kennedy et al., 2010) provided support for composing the subscales on a conceptual basis, and in the present study, the PCA (cf. p. 98) supported both the subscales themselves and their construction as unweighted means. The use pattern subscales (Table 5) resemble those described in previous studies (Jones & Hosein, 2010; Kennedy et al., 2010; van den Beemt et al., 2011; Thompson, 2013).

The use pattern subscales inter-correlated to some degree, which seems conceptually reasonable: across clusters, use pattern activity levels show an obvious trend (Figures 2–3), with some exceptions (game playing, blogging). That is, since the subscales describe patterns of technology-based activities, it is not far-fetched to imagine a latent, second-level “technology orientation” factor influencing the use patterns.

Table 5. Use pattern subscales and their correspondence to previous studies.

Use pattern subscale Correspondence to previous research
Versatile phone use Technically Oriented use (Jones & Hosein 2010)
Standard mobile use (Kennedy et al. 2010)
Game playing Game-oriented use (Jones & Hosein 2010)
Performing (van den Beemt et al. 2011)
Gaming (Kennedy et al. 2010, Thompson 2013)
Sharing pictures and files Web-interactive (Jones & Hosein 2010)
Media sharing (Kennedy et al. 2010)
Collaborative Web Tool Use (Thompson 2013)
Digital news media use Reading news websites (van den Beemt et al. 2011, single item)
Blogging Web Interactive (Jones & Hosein 2010)
Interchanging & Authoring (van den Beemt et al. 2011)
Web 2.0 publishing (Kennedy et al. 2010)
Active Web Reading and Writing (Thompson 2013)

Both Kennedy et al. (2010) and van den Beemt et al. (2011) presented a four-cluster solution. In the present study, different solutions were tested, and as in previous studies, the High-end and Low-end users appeared in all solutions (Table 6). A solution with few clusters may turn out too coarse (cf. Kennedy et al., 2010), but allowing more clusters opens up for more nuanced information about the cluster characteristics. The clusters are naturally not identical with those described in previous studies, but share numerous similarities and serve exploration of inter-cluster differences.

Table 6. User clusters and their correspondence to previous studies.

Cluster Correspondence to previous research
Low-end users Basic or Irregular users (Kennedy et al. 2010);
Traditionalists (van den Beemt et al. 2011)
Bloggers Cluster 3 (Jones & Hosein, 2010);
Irregular users (Kennedy et al., 2010);
Traditionalists (van den Beemt et al., 2011)
Gamers Cluster 4 (Jones & Hosein, 2010);
Gamers (van den Beemt et al., 2011)
Communication-oriented users Cluster 3 (Jones & Hosein, 2010);
Ordinary users (Kennedy et al., 2010);
Networkers (van den Beemt et al., 2011)
High-end users Cluster 1 (Jones & Hosein, 2010);
Power users (Kennedy et al., 2010);
Producers (van den Beemt et al., 2011)

The cluster model had a fairly balanced factor predictor importance (Table 2), a rather even distribution (ratio 1.45), and clearly distinguishable use frequency and skills profiles (Figures 2–3), all speaking in favour of the model.

The first research question set out to explore what kind of groups can be identified based on ICT and media use patterns, and cluster analysis produced five clusters (Table 3). The subscales turned out balanced regarding predictive value (Table 2), that is, all subscales contributed to cluster construction without any of them dominating.

Also, the input factor with weakest internal consistency, Sharing pictures and files, turned out relevant in distinguishing the clusters. The cluster solution distinguishes the clusters fairly well regarding both overall use activity (Figure 2) and use patterns (Figure 3).

Performance-based ICT skills

The comparison across clusters (Figure 4) showed that each module had a different capacity for distinguishing the clusters. With knowledge of how the questions and tasks in the modules were constructed, a possible explanation could be that modules 1 and 3 required both specific ICT knowledge and the skills to apply that knowledge in practical problem-solving (cf. p. 94).

This supported the use of MCQ items with one specific answer, measuring exactly while scoring all or nothing. In contrast, in the subject areas of information seeking and information security, knowledge was not as simply structured, and therefore more multiple-answer items were used. For the students, this made deduction and score collection easier, for example by excluding the most implausible response options and choosing the most probable ones.

To conclude, not all subject areas lend themselves well to computer-based testing in an open environment without some loss of test accuracy.

A large part of the sample lacks skills in using basic office tools (Table 4). This is problematic, firstly because these skills should already be developed in upper secondary school as a preparation for higher education studies, and secondly because using those tools for producing texts with a scholarly approach is a central working method in higher education.

Responding to the second research question in the light of this sample, we can state that the young generation is heterogeneously ICT skilled. The ICTDL level test scores ranged over the whole scale in all skill areas (modules), which supports previous research that has dismissed the assumption that all members of the young generation are net savvy (Helsper & Eynon, 2010; Kennedy et al., 2010; Kvavik & Caruso, 2005; van Deursen & van Dijk, 2009; van den Beemt et al., 2011).

Digital Natives among the young generation

Besides the pronounced heterogeneity in performance-based ICT skills both across and within clusters, the results suggest that the competency profiles of the clusters lie at different levels (Figure 4), which allows us to discuss which clusters can be regarded as Digital Natives or Native-like based on both ICT use patterns (p. 99) and performance-based ICT skills (p. 101).

High-end users (16.2%) engage frequently in a broad range of technological activities and perform very well in ICT skills tests. This cluster apparently holds users that correspond to the concept of Digital Natives.

Gamers (17.5%) show a high activity level in Gaming but varying levels in other areas. Gamers outscore High-end users in all modules except module 4, and a majority exhibit good skills in nearly all modules.

Communication-oriented users (23.1%) show an overall activity level higher than that of the Gamers, mostly due to Versatile phone use and Digital news media use. Their ICT skills are at a medium level.

Bloggers (23.4%) are close to Gamers regarding overall activity but more active on Versatile phone use and Blogging. Bloggers exhibit medium ICT skills.

Low-end users (19.8%) are low in overall use, exhibit the lowest use frequencies except in Digital news media use, and have the lowest ICT skills scores.

Questions arise. Should Gamers, despite moderate use frequency, be regarded as Digital Natives due to their high level test scores? Should Communication-oriented users’ use frequency alone justify regarding them as Digital Natives?

The above assessment does not sum up into a clear-cut statement declaring which of the clusters are Digital Natives. Instead, it seems obvious that the use pattern-based characteristics (pp. 97–98) are not exclusive, but rather overlap and occur to varying extents in the different clusters.

The same applies for the ICT skills that turned out to cover the whole scale in all clusters. Thus, all the Digital Natives characteristics, that is, both use patterns and skills, are widely distributed, which supports the view of Helsper and Eynon (2010) regarding Digital Nativeness as orientations or characteristics on a continuum.

To conclude, we can respond to the third research question by stating that High-end users correspond very well with what has been described as Digital Natives, and we can be confident in positioning them at the Digital Natives pole of the continuum. Gamers are positioned next to them, and Communication-oriented users somewhere towards the middle. Low-end users are positioned at the non-Digital Native pole of the continuum and the Bloggers next to them.

For those readers expecting a numeric answer, we may conclude that around 16% of this sample (High-end users) resemble so-called Digital Natives strongly, whereas around 18% (Gamers) can be described as Digital Native-like. Around 23% (Communication-oriented users) resemble Digital Natives weakly, and the remaining 43% (Low-end users and Bloggers) do not resemble Digital Natives. Thus, around a third of the young generation may be regarded as Digital Natives, which supports previous studies (pp. 90–92) that refute assumptions of a homogeneous Net Generation.

Conclusions and future directions

The subsample represents speakers of a minority language (Swedish), but in general, the background population is close to the national average. Still, the conclusions are not to be generalized, but serve as a contribution to the discussion about the heterogeneity of the young generation and the prevailing digital divide.

The results support previous studies in finding the Net Generation just as heterogeneous as any other cohort. Furthermore, even the clusters resembling Digital Natives contain users with rather poor ICT skills, which refutes the assumption of net savvy Digital Natives.

As the results above show, the ICT skills among the young generation are diverse, limited and do not necessarily match the requirements of higher education studies, increasing the risk of a digital divide. The results indicate that we still suffer from a secondary and tertiary level digital divide (p. 91); that is, not all users manage to develop their skills to a level that allows them to fully utilize the advantages of ICT, for example, in their studies. Acknowledging these divides is a necessary step towards taking measures to bridge them.

Future research regarding ICT usage and related background factors needs to pay attention to all the circumstances for use, that is, not only to material, temporal and spatial access, but also to how the surrounding culture supports ICT use and developing skills (cf. North et al., 2008; van Dijk, 2008, p. 290; Robinson, 2009; Gui & Argentin, 2011).

In future work by the author, Digital Nativeness will be further explored in connection to students’ epistemic beliefs and how they occur across different clusters within the young generation.

Acknowledgements

This research was funded by Föreningen Konstsamfundet, Koulutusrahasto and Svenska Kulturfonden. I am grateful for the support provided by Arcada through Filip Levälahti and all participating students during the data collection process, and for the support from the MEDA project through Matteo Stocchetti. Permission to use the SETQ instrument (Gray et al., 2009) is gratefully acknowledged. I am grateful also for the feedback provided by my supervisors Marita Mäkelä and Tere Vadén, my colleagues Peter Mildén and Nigel Kimberley, and by anonymous reviewers to an earlier draft of this paper.

https://www.helsinki.fi/en/ict-driving-licence

References

Aesaert, K., & van Braak, J. (2015). Gender and socioeconomic related differences in performance based ICT competences. Computers & Education, 84, 8–25. doi: 10.1016/j.compedu.2014.12.017

Anderson, S., & Balsamo, A. (2008). A Pedagogy for Original Synners. In T. McPherson (Ed.), Digital Young, Innovation, and the Unexpected (1st ed., pp. 241–259). Cambridge, MA: The MIT Press. doi: 10.1162/dmal.9780262633598.241

Bailey, K. D. (2005). Typology Construction, Methods and Issues. In K. Kempf-Leonard (Ed.), Encyclopedia of Social Measurement (pp. 889–898). New York: Elsevier. doi:10.1016/B0-12-369398-5/00108-0

Becher, T. (1994). The significance of disciplinary differences. Studies in Higher Education, 19(2), 151–161. doi:10.1080/03075079412331382007

Best, S., & Kellner, D. (2003). Contemporary Youth and the Postmodern Adventure. Review of Education, Pedagogy & Cultural Studies, 25(2), 75–93. doi:10.1080/10714410390198949

Büchi, M., Just, N., & Latzer, M. (2016). Modeling the second-level digital divide: A five-country study of social differences in Internet use. New Media & Society, 18(11), 2703–2722. doi:10.1177/1461444815604154

Buckingham, D. (2006). Is There a Digital Generation? In D. Buckingham, & R. Willett (Eds.), Digital Generations: Children, Young People and New Media (pp. 1–14). Mahwah, NJ: Lawrence Erlbaum.

Bullen, M., Morgan, T., & Qayyum, A. (2011). Digital learners in higher education: Generation is not the issue. Canadian Journal of Learning and Technology, 37(1), 1–24. doi:10.21432/t2nc7b

Ellis, P. D. (2009). Effect size calculators. Retrieved from https://www.polyu.edu.hk/mm/effectsizefaqs/calculator/calculator.html

Finch, W. H., Immekus, J. C., & French, B. F. (2016). Applied Psychometrics Using SPSS and AMOS. Charlotte, NC: Information Age Publishing Inc.

Gray, K. M., Kennedy, G., Waycott, J., Dalgarno, B., Bennett, S., Chang, R., . . . Krause, K. (2009). Educating the Net Generation: A Toolkit of Resources for Educators in Australian Universities. Retrieved from http://www.netgen.unimelb.edu.au/

Gui, M., & Argentin, G. (2011). Digital skills of internet natives: Different forms of digital literacy in a random sample of northern Italian high school students. New Media & Society, 13(6), 963–980. doi:10.1177/1461444810389751

Helsper, E. J., & Eynon, R. (2010). Digital natives: where is the evidence? British Educational Research Journal, 36(3), 503–520. doi:10.1080/01411920902989227

Hohlfeld, T. N., Ritzhaupt, A. D., & Barron, A. E. (2010). Development and Validation of the Student Tool for Technology Literacy (ST2L). Journal of Research on Technology in Education, 42(4), 361–389. doi:10.1080/15391523.2010.10782556

Horrigan, J. B. (2007). A Typology of Information and Communication Technology Users. Pew Internet & American Life Project.

Howe, N., & Strauss, W. (2000). Millennials Rising: The Next Great Generation. NY: Vintage Books.

Huggins, A. C., Ritzhaupt, A. D., & Dawson, K. (2014). Measuring Information and Communication Technology Literacy using a performance assessment: Validation of the Student Tool for Technology Literacy (ST2L). Computers & Education, 77, 1–12. doi:10.1016/j.compedu.2014.04.005

Jones, C., & Hosein, A. (2010). Profiling University Students' Use of Technology: Where is the Net Generation Divide? International Journal of Technology, Knowledge & Society, 6(3), 43–58. doi:10.18848/1832-3669/cgp/v06i03/56097

Jones, C., Ramanau, R., Cross, S., & Healing, G. (2010). Net generation or Digital Natives: Is there a distinct new generation entering university? Computers & Education, 54(3), 722–732. doi:10.1016/j.compedu.2009.09.022

Kennedy, G., Judd, T. S., Churchward, A., Gray, K., & Krause, K. (2008). First year students’ experiences with technology: Are they really digital natives? Australasian Journal of Educational Technology, 24(1), 108–122. doi:10.14742/ajet.1233

Kennedy, G., Judd, T., Dalgarno, B., & Waycott, J. (2010). Beyond natives and immigrants: exploring types of net generation students. Journal of Computer Assisted Learning, 26(5), 332–343. doi:10.1111/j.1365-2729.2010.00371.x

Kirschner, P. A., & van Merriënboer, J. J. (2013). Do Learners Really Know Best? Urban Legends in Education. Educational Psychologist, 48(3), 169–183. doi:10.1080/00461520.2013.804395

Kvavik, R. B., & Caruso, J. B. (2005). ECAR Study of Students and Information Technology, 2005: Convenience, Connection, Control, and Learning. Boulder, CO: EDUCAUSE Center for Applied Research. Retrieved from http://www.educause.edu/

Lai, K., & Hong, K. (2015). Technology use and learning characteristics of students in higher education: Do generational differences exist? British Journal of Educational Technology, 46(4), 725–738. doi:10.1111/bjet.12161

Lee, L. (2005). Young people and the Internet: From theory to practice. Young, 13(4), 315–326. doi:10.1177/1103308805057050

Litt, E. (2013). Measuring users’ internet skills: A review of past assessments and a look toward the future. New Media & Society, 15(4), 612–630. doi:10.1177/1461444813475424

North, S., Snyder, I., & Bulfin, S. (2008). Digital tastes: Social class and young people's technology use. Information, Communication & Society, 11(7), 895–911. doi:10.1080/13691180802109006

Prensky, M. (2001a). Digital Natives, Digital Immigrants Part 1. On the Horizon, 9(5), 1–6.

Prensky, M. (2001b). Digital Natives, Digital Immigrants Part 2: Do They Really Think Differently? On the Horizon, 9(6), 1–6.

Robinson, L. (2009). A taste for the necessary. A Bourdieuian approach to digital inequality. Information, Communication & Society, 12(4), 488–507. doi:10.1080/13691180902857678

Siemens, G. (2005). Connectivism: A Learning Theory for the Digital Age. International Journal of Instructional Technology and Distance Learning, 2(1), 3–10. Retrieved from http://itdl.org/Journal/Jan_05/article01.htm

SPSS. (2016). SPSS 24.0. Chicago, IL: SPSS Inc., IBM Corporation.

Tapscott, D. (1998). Growing up digital: The rise of the net generation. New York: McGraw-Hill.

Thompson, P. (2013). The digital natives as learners: Technology use patterns and approaches to learning. Computers & Education, 65, 12–33. doi:10.1016/j.compedu.2012.12.022

van den Beemt, A., Akkerman, S., & Simons, P. R. J. (2011). Patterns of interactive media use among contemporary youth. Journal of Computer Assisted Learning, 27(2), 103–118. doi:10.1111/j.1365-2729.2010.00384.x

van Deursen, A. J. A. M., & van Dijk, J. A. G. M. (2009). Improving digital skills for the use of online public information and services. Government Information Quarterly, 26(2), 333–340. doi:10.1016/j.giq.2008.11.002

van Deursen, A. J. A. M., van Dijk, J. A. G. M., & Peters, O. (2012). Proposing a Survey Instrument for Measuring Operational, Formal, Information, and Strategic Internet Skills. International Journal of Human-Computer Interaction, 28(12), 827–837. doi:10.1080/10447318.2012.670086

van Dijk, J. A. G. M. (2008). One Europe, digitally divided. In P. N. Howard, & A. Chadwick (Eds.), Routledge Handbook of Internet Politics (pp. 288–304). NY: Taylor and Francis. Retrieved from ProQuest Ebook Central, http://ebookcentral.proquest.com