The Future of Work

Since the turn of the century, a seemingly never-ending series of advocates has told whoever will listen about the changing nature of work in the coming decades. Graphs such as the one I adapted from Levy and Murnane (2005) seem to convey the typical message:

Expected trends in 21st century work (adapted from Levy and Murnane, 2005)

 

In general, these advocates predict employers will value different skills than previously.

I cannot disagree that new skills are necessary. I see the work done by people around me (including the work of my adult children’s generation and of those in my generation who are into our third decade in the workforce), and I see it requiring different skills than were needed by my parents and their parents. I do think many advocates have missed one important change, however.

Technology is certainly embedded (deeply) in our work. We use computers to interact and to access, process, and create information. What appears to be missing in much of the literature about “the future of work” is the important role of humans who are skilled at navigating the space between technology and human affairs.

Consider these examples:

  • A space planner who considers disparate information (some of it explicit, much of it implicit) to decide what products are sold in grocery stores and how much inventory of each product is kept. This individual uses sales information, demographics of store locations, industry trends, and conversations with local managers to plan what goes where on the shelves.
  • The manufacturing professional who designs products and models them on computer screens before building the products and determining their quality. They then understand the technical and human parts of the manufacturing system to ensure the processes become more efficient and produce items that meet customers’ needs.
  • The teacher who makes use of sophisticated graphing tools that allow students to play with graphs to see how they vary depending on the coefficients, constants, and exponents (a minimal sketch of that kind of exploration follows this list). Those teachers then help students translate the graphs into situations and measurements rather than spending time learning the rules and algorithms of graphing the equations.
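To make the idea concrete, here is a minimal sketch in Python of the kind of exploration such a tool supports; numpy and matplotlib are assumed to be available, and the quadratic family and coefficient values are purely illustrative rather than taken from any particular classroom tool.

```python
# Plot one family of quadratics so the effect of the leading coefficient is visible.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-5, 5, 200)

# Vary the leading coefficient a in y = a*x^2 + x - 2 and watch the shape change.
for a in (-1, 0.5, 1, 2):
    plt.plot(x, a * x**2 + x - 2, label=f"y = {a}x^2 + x - 2")

plt.axhline(0, color="gray", linewidth=0.5)  # x-axis for reference
plt.legend()
plt.title("How the leading coefficient changes a parabola")
plt.show()
```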

Daniel Pink (2005) and others have reminded us that the future of work is in the “things” humans can do but computers cannot. Advocates for coding remind us that the future of work is in the ability to control the IT in our lives. I maintain the future of work lies in the ability to leverage IT to accomplish tasks relevant to human needs.

 

References

Levy, F., & Murnane, R. J. (2005). The new division of labor: How computers are creating the next job market. New York: Russell Sage Foundation.

Pink, D. (2005). A whole new mind: Why right-brainers will rule the future. New York: Riverhead Books.

What’s Wrong with Coding?

Coding is a hot topic in my media feeds again… it resurfaces each year around the events designed to increase students’ experience writing code. I get it, but I am distressed by educators’ (and philanthropists’) fascination with coding. We are looking too closely at the field of design and are missing the far more important task: manufacturing, or more simply, building stuff.

I recently had a conversation with an individual who brings learners into the space pictured to the right to build projects like that also pictured. He was an engineer before opening this space, and he told the story of his grandfather who used to say, “If you can’t build it, then you are not an engineer.”

As we get our students coding, let’s be sure we help them understand the nature of their work: they are in control of the computer, and it will do what their programs direct. Perhaps we need a wise grandfather to remind students, “If you can’t program it, then you are not a computer user.”

As we get our students coding, let’s also ask them to look at the monitor, mouse, computer case (inside and outside), and the tables, chairs, lights, windows, and classrooms (both physical and organizational) to predict how they can be improved through good design. Then, let’s give students the tools and autonomy to redesign.

 

Three Questions and Measures for Assessment

“Assessment” has been an important aspect of teaching and learning (or perhaps more accurately, it has been a buzzword garnering much attention) for most of my career in education. Advocates for many positions (political as much as pedagogical) argue for the role of assessment in achieving their vision, thus “fixing the broken educational system” once and for all.

The reality, of course, is that assessment is a much more sophisticated and nuanced part of the educational experience than these advocates allow. Clearly, educators must determine what has been learned by the student, and (for many reasons) that learning must be reduced to a number of proxies, each designed to capture and reflect what the student has learned.

In many ways, the summaries we use to assess students’ learning are an attempt to reify what happens in schools. We reason, “My methods must work, because I observed these changes on these assessments.” Educators do not admit, however, that our instruments are weak (“aligning your assessments with your instruction” is worthwhile, but dubious), subject to misuse (students don’t bother reading questions, educators’ biases affect their assessments), and that we can be quite unskilled at understanding results.

The problem of defining and implementing appropriate assessment in schools is becoming more challenging as well. When print dominated, educators could be relatively certain of the skills that students needed. I have some of my grandfather’s college textbooks next to mine. We both studied science, which had largely changed in the 49 years between our graduation dates, but we both learned by reading textbooks and taking notes in those books. Today, students carry laptops and digital textbooks, and they are as likely to use video to study as they are to use textbooks. “Becoming educated” has been a more sophisticated endeavor for my children than it was for my grandfather and me. My experiences as someone who has succeeded in both of these worlds are interesting, but the topic of another post.

Largely because information (and other) technology is changing how individual humans understand, how we organize our institutions, and the norms society holds, educators cannot predict with the same certainty what students must learn and which proxies are appropriate for assessment purposes. This is a problem that has occupied my professional attention in recent years, and thanks to continued efforts to collaboratively design a comprehensive assessment method, colleagues and I have a much clearer, more complete, and simpler system for answering essential assessment questions.

First, we conclude that three questions are relevant to understanding what matters in students’ learning, and that each has equal value:

  • Does the student have the habits of effective learners and workers?
  • Can the student produce polished solutions to sophisticated problems?
  • How does the student compare to others?

These questions are answered in different ways, and together the three form a reasonable and complete system for assessing students’ learning.

Three assessment tools: course grades, performances, and tests

 

In course grades, we answer the question “Does the student have the habits of effective learners and workers?” Consider the typical classroom. Over the course of months, students participate in a variety of activities and complete a range of assignments and tasks. Teachers make professional judgments about the characteristics of each student and the degree to which he or she has mastered the material and is prepared to learn. Just as we do not always expect a supervisor to follow an objective instrument when judging workers’ performance, we should not expect educators to be completely objective.

Of course, as subjectivity enters the grading process, educators will find it necessary to defend decisions, which will motivate them to more deeply articulate expectations, observe learning, and record that learning. All of these are benefits of including educators’ judgments in course grades.

A performance is an activity in which we answer, “Can the student produce polished solutions to sophisticated problems?” Performances are those projects and products that working professionals would recognize as familiar outcomes; professionals would be interested in the motivation for the performance, the nature of the work, and the quality of the result. Questions regarding a performance are best directed to the student because it was selected, planned, and carried out by the student.

Teachers do have a role in setting the context of a performance, guiding decisions, and facilitating the student’s reflection on the activity; but through a performance, a student demonstrates the capacity to frame and solve complex problems and complete complex communication tasks. While “projects” that are included in course grades contribute to students’ ability to complete these assessments, performances are typically independently constructed and lie outside of traditional curriculum boundaries.

Tests have been at the center of intense interest in educational policy in the 21st century. The political motivations for these tests have been challenged, but that debate is beyond the focus of this post. For the purposes of this essay it is sufficient to recognize that large-scale tests (think the SAT, ACT, SBAC, PARCC, Accuplacer, and the like) can be used to determine how a particular student did in comparison to all of the others who took that test.

A few details are necessary to complete the picture of what these tests show. First, standardized tests were used almost exclusively for these purposes in the 20th century. This century, standards-based tests have become more common. A standardized test is a norm-referenced test, which means the scores are expected to follow a normal distribution (bell curve) and an individual’s score is understood in terms of where it falls in that distribution. On a standards-based test, an individual’s score is compared to the score he or she would be expected to earn if the standard has been met.
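To make the contrast concrete, here is a minimal sketch in Python of the two interpretations; the score, mean, standard deviation, and cut score are illustrative assumptions, not values from any real test.

```python
# Two ways to interpret the same test score.
from math import erf, sqrt

def percentile_rank(score, mean, sd):
    """Norm-referenced: where the score falls in a normal (bell-curve) distribution."""
    z = (score - mean) / sd
    return 100 * 0.5 * (1 + erf(z / sqrt(2)))

def meets_standard(score, cut_score):
    """Standards-based: whether the score reaches the cut score set for the standard."""
    return score >= cut_score

score = 540
print(f"Percentile rank: {percentile_rank(score, mean=500, sd=100):.0f}")   # about the 66th percentile
print(f"Meets the standard (cut score 530): {meets_standard(score, 530)}")  # True
```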

Regardless of the exact nature of the tests, those interested in assessment of learning must recognize that these tests are administered for the purpose of comparing. Also, these tests are of dubious reliability. One of the fundamental ideas of all data collection is that measurements have errors, so a single measure taken with one instrument administered once is really meaningless. While the test results of a large group of students may allow us to draw conclusions about the group as a whole, a single student’s score cannot be used to draw reasonable conclusions about that student.
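Here is a minimal sketch of the classical-test-theory reasoning behind that caution; the reliability, standard deviation, and observed score are illustrative assumptions.

```python
# Standard error of measurement: how much a single observed score could plausibly vary.
from math import sqrt

def standard_error_of_measurement(sd, reliability):
    """Classical test theory: SEM = SD * sqrt(1 - reliability)."""
    return sd * sqrt(1 - reliability)

sd, reliability, observed = 100, 0.90, 540
sem = standard_error_of_measurement(sd, reliability)
low, high = observed - 1.96 * sem, observed + 1.96 * sem

# Even with respectable reliability, one administration leaves a wide band of plausible scores.
print(f"SEM = {sem:.1f}; 95% band around {observed}: {low:.0f} to {high:.0f}")
```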

If we consider assessment as a method whereby educators can understand their program as much as they can understand students’ learning, then we see the three questions and the three types of assessments forming a meaningful and informative assessment system.

Educational Adjectives

Perhaps it is the many advertisements that have found their way through my spam filter recently. Perhaps it is that I have been reading (actually browsing) equal amounts of vendor-created content in trade magazines and peer-reviewed book chapters and articles from academic authors. Regardless of the origins, it is becoming clear to me that there are three terms that seem synonymous to some writers, but clearly differentiated to others.

As the terms (student-centered, differentiated, individualized) have important implications for classroom practices and the nature of students’ learning experiences, this is my attempt to introduce some dimensions to define and differentiate these terms.

It must be recognized that the three terms have been introduced in recent years to encourage educators to recognize that students are different; therefore, different learning objectives and different rates of progress through the curriculum are appropriate for different students. In general, we can recognize two dimensions: the pace at which a student works through the curriculum can be determined by the student or by the teacher, and the learning outcomes may be common to all students or specific to each student.

Together, these two dimensions lead to four types of instruction:

  • Teacher-controlled pace and common objectives for all students can be accurately labeled instructionist education. In this model (familiar to me from my schooling in the 1970s and to my children in the 2000s), teachers decide what is taught and when it is taught, and all of the students either keep up or fall behind.
  • Teacher-controlled pace but student-specific objectives are commonly called differentiated instruction. What is taught (and how it is assessed) may differ from student to student, but the teacher is the primary evaluator and judge of when to proceed to the next topics.
  • Student-controlled pace through teacher-defined objectives is individualized instruction; the current interest in competency-based education and the mastery learning of previous generations are examples of this model.
  • Finally, in those classrooms in which students play an active role in defining what is to be studied, how it will be assessed, and when to move on to the next topic, we can properly label the curriculum student-centered.

While there are other relevant dimensions of teaching and learning, these terms seem to be defined through these two dimensions. Clearly, as well, there are situations in which each of these is appropriate (or inappropriate). In my experience, good teachers understand these differences and use the correct model for the purpose. Further, good teachers ensure students have the opportunity to experience each during every course they complete.