July 15, 2013

THE WEEKLY BLAB

Volume 7, Issue 38 – July 15, 2013

 

Summertime…

And the living is not too easy.  Today’s my birthday and I’m now an ancient 58 years old, having caught up with Jill who got there two months ago.  What’s more, it’s too darn hot and someone needs to remind ex-Governor Perdue to pray for the rain to let up.  I was gonna be clever and quote Mark Twain’s line “Everyone talks about the weather, but nobody does anything about it”, but snopes.com informs me that Twain never said that.  He did, however, say: “If you don’t like the weather in New England, just wait a few minutes.”  So there.

 

 

Early Warning and Intelligent Agents

A topic that arose at the last Deans Council meeting has attracted some attention (and perhaps concern), both in Joel Fowler’s unofficial meeting minutes and in an email from Meg Dillon.  I thought I’d take the opportunity to give a little context to the topic.

As all of you should be aware, for the past several years we’ve been operating an early warning system, wherein faculty report on student engagement at the end of the third week of classes.  Originally, faculty reported students as being “engaged” or “not engaged”, though a few years in we added a third response of “not present”, in order to satisfy federal reporting requirements related to the awarding of financial aid.

Students who are identified as being “not engaged” are sent letters and emails suggesting that they speak to their teachers, consult their advisors, seek help in the ATTIC, and other such general advice.  Some faculty have used the early warning system proactively, telling students on the first day of class how important it is to make a good start and that we’ll be evaluating their engagement at the end of the third week.  They have reported that good things have resulted—their students get engaged earlier, leading to better academic results.  The purpose of the early warning system is, obviously, to identify students early on who are getting into academic trouble, and to help them.  We were one of the first in the USG to have an early warning system, but quite a few institutions have one now as part of Complete College Georgia.

The discussion at the Deans Council looked at the early warning system and raised a few issues.  First, while the current early warning system tells us WHO is in potential trouble, it doesn’t tell us WHY.  Thus, every student who is identified as being unengaged gets the same general response.  A form used at California State University at Northridge was circulated as a possible model of how we might make the process a little more granular.  On the form, the faculty member checks off one or more boxes, such as “attendance”, “tardiness”, “lack of homework”, “low grades”, etc.  So, IF we were to implement such a more granular process, faculty would check off the relevant box(es) corresponding to WHY they marked a student as “not engaged”, obviously doing this only for students whom they were marking “not engaged”.  That’s it.

The outcome would be that we could give students more specific advice as to what was going wrong and what they should do about it.  I asked if this was a way we might want to get more granular, or if there was some other way, or if we might want to do this just in certain courses (such as freshman gateway courses), or if we wanted to leave things alone.

Also, we discussed whether we could use technology so that faculty wouldn’t have to do the early warning system report by hand—i.e., to make the job easier.  Our new learning management system, D2L, has intelligent agents built into it.  An intelligent agent is a logical IF/THEN statement.  An example would be something like “IF a student misses three assignments in a row, THEN send email message #3”.  Once the faculty member sets the criterion (missing three assignments in a row) and writes message #3 (‘Catch up with your homework’), the intelligent agent does the rest—whenever and for whichever student the statement is true, the email goes out automatically.  What was raised in the discussion was that, for those faculty using D2L, we might be able to pre-write some intelligent agents that would correspond to the early warning system.  Faculty wanting to use them could modify them for their own purposes (one faculty member might want to make the threshold for “not engaged” be a grade of below 75 on the first quiz, whereas another might set the threshold at 65, and another might not use quizzes at all and use something else).  Once the intelligent agent is written, it does all the work and can be used term after term.
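
For the technically curious, here’s a minimal sketch of the IF/THEN logic involved, written in Python.  To be clear, this is my own illustration; in D2L you configure agents through its web interface rather than by writing code, and the record layout, function names, and email text here are invented for the example:

```python
# Illustrative sketch only: D2L's real intelligent agents are set up
# through its web interface; this just models the IF/THEN rule.
# The student record layout and email text are invented for the example.

def consecutive_missed_assignments(student):
    """Count the current streak of missed assignments (most recent last)."""
    streak = 0
    for submitted in reversed(student["submissions"]):
        if submitted:
            break
        streak += 1
    return streak

def send_email(address, message):
    """Stand-in for whatever actually delivers the message."""
    print(f"To {address}: {message}")

def run_agent(student, threshold=3):
    """IF the student has missed `threshold` assignments in a row,
    THEN send the pre-written catch-up email (message #3)."""
    if consecutive_missed_assignments(student) >= threshold:
        send_email(student["email"], "Catch up with your homework")

# One faculty member could just as easily trigger on a quiz grade instead,
# e.g. `if student["quiz1"] < 75: ...` with whatever threshold they like.
run_agent({"email": "student@spsu.edu",
           "submissions": [True, False, False, False]})
```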

The discussion then went into the weeds a bit, with one person saying: “Most of our classes are face-to-face, so wouldn’t faculty have to enter attendance data into D2L if they wanted to use an intelligent agent to do the early warning grades?”  The answer is obviously “yes”—D2L has no idea who has shown up for your live class.  If you want to use attendance as a criterion for engagement, you’ll have to enter the attendance into D2L in exactly the same way as if you’re going to use grades as a criterion—you have to enter the grades into D2L.  Is this more work?  I don’t think so—if you are currently using attendance as a criterion, presumably you’re taking the attendance and writing it down somewhere.  Ditto with grades or whatever other criterion you’re using.  Instead of writing them in a physical gradebook, you’d write them in the gradebook in D2L.  Seems the same to me.

We then started arguing about a technical matter: whether the intelligent agents in D2L (as they are currently set up) can tell you how much time an online student has spent in D2L for a particular course, with some thinking the intelligent agents can’t do it, and some thinking they can.  (Fact is, they can.)

In any event, no one is being forced to use D2L for early warning.  It’s an idea that I think we should explore, since it might make some faculty members’ lives easier.  If entering attendance or grades into D2L is too big a deal, they can continue to use the current system.

The meeting also featured a disagreement between me and one of the deans.  The dean questioned whether we should change what we’re currently doing regarding the early warning system when we don’t know if the current system is doing what it should—we should gather data first to see if it is indeed, for example, increasing graduation rates (if that was indeed its purpose).  I argued that we know what the early warning system is supposed to do (identify students who may be in trouble), and that it would be impossible to measure whether the early warning system leads to higher graduation rates.  Our graduation rates are indeed rising, but you can’t attribute that to the early warning system or any other single factor—graduation rates are functions of multiple factors.  You can hear the disagreement (and the full meeting) on the podcast if you’re interested, by clicking here.

So, does this whole deal rise to the statement that was reported, namely: “a more detailed system would greatly increase the workload of faculty teaching face-to-face classes”?

IF we decided to do everything that was discussed (and bear in mind that no such decision was made), these would be the results:

1)    The “more granularity” part means that you’d have to check off a box or two telling WHY you marked any given student as “not engaged”, perhaps only in selected courses, and

2)    We’d be testing to see if intelligent agents in D2L could be set up to do early warning grades automatically, and you’d decide if you wanted to use them or fill in the early warning grades as you do now.

You can decide if that’s a big deal and if it would “greatly increase your workload” or not.  Feel free to post any comments you might have—I’d be interested in seeing them.

Finally, as to the issue of involving faculty in deciding whether we should change things—the consensus (such as it was) was that it is premature at this point.  We need to see how easily the intelligent agents can be set up in D2L (perhaps laying out a sample course shell to illustrate how it might work), and to propose how we might make the early warning system more granular (if we decide to go there).  When and if that’s done, we’ll bring it to the faculty for input and comments before implementing anything.

Report From RACAA

As promised last time, here’s a report of some of the highlights from the recent RACAA meeting, held July 7-9.

Sunday night, the after-dinner speaker was Amanda Seals, Executive Director of Government Relations for the USG.  She mentioned that legislators are very politically concerned, due to potential primary challenges from within their own party.  She noted that all bills that weren’t acted on last session are still active, including a bill to allow the carrying of guns on campuses.  [A comment from the Student Affairs side on Tuesday was “The nuts are getting nuttier and now we’re going to arm them.”] Another bill of concern would allow the technical colleges to become community colleges.  The USG is following both bills closely, and working with legislators to represent our views.

Monday morning, Senior Vice Chancellor Houston Davis spoke about Complete College Georgia.  He noted that a 3% increase each year in undergraduate degrees would take us from 41,000 to 62,000 degrees by 2025, resulting in a total of 147,000 additional degrees.  The new funding formula will have at least one variable related to the number of graduates (based on the expectation that the number of graduates will increase by 3% per year on each campus); other factors may include improvements in graduation rates, numbers of students who reach the 30/60/90 credit milestones, and improvements in graduation rates for students from economically disadvantaged backgrounds and those who needed remediation.  He noted that CCG outcomes should emphasize K-12 partnerships; college access, affordability and value; new, flexible, and affordable pathways to degree completion; support for at-risk populations; and maintaining and improving the quality of teaching and learning.
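
As a back-of-the-envelope check, those figures are consistent with 3% annual compounding if the 41,000-degree baseline refers to roughly 2011 (the base year wasn’t stated, so that’s my assumption):

```python
# Sanity check of the 3%-per-year degree figures. Assumption: the
# 41,000-degree baseline is for ~2011; the talk didn't give the base year.

base, rate = 41_000, 0.03
degrees, additional = float(base), 0.0
for year in range(2012, 2026):          # 14 years of 3% growth
    degrees *= 1 + rate
    additional += degrees - base        # degrees above the flat baseline

print(f"Degrees awarded in 2025: {degrees:,.0f}")           # ~62,000
print(f"Cumulative additional degrees: {additional:,.0f}")  # ~147,000
```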

He also talked quite a bit about MOOCs, noting that they are another tool in the teaching “tool chest” and that we need to be aware of the current models.  Private developers want to disrupt the current educational model in favor of open source and open learning objects.  Their goal is to drive down the cost of education and to make a profit doing it.

MOOCs are being looked at in terms of credits, course sequences, supplementing instruction, accelerating the path to a degree, and their ability to deliver full degrees and certificates.  MOOC conversations are changing on a daily basis, and the pace of change is accelerating.  He asked: “Can we use MOOCs in a cost effective manner to improve access and accelerate degree completion while maintaining quality?  They are a disruptive innovation that challenges the fiscal model and core components of our enterprise.  They create a tension between student-centered and institution-centered worldviews.”

The initial USG thoughts about MOOCs are that we need to look at them from the following vantage points:

  • Can they be used as an extension of the USG’s commitment to affordability, quality, access, and completion?
  • We will need increased policy flexibility and options for campuses to experiment and be innovative.
  • We can lead in the creation of complementary pathways to credit and credentials.
  • We need to explore the implications of our quality review processes and general academic program approval/review mechanisms with respect to MOOCs.
  • We will partner with other systems and states in developing some models.
  • There is an opportunity for the USG to be a leader in using MOOC content for high quality, more affordable, course and program opportunities.

Curt Carver then spoke about what is necessary to enable student success through MOOCs.  A number of elements need to be in place for a MOOC to be successful:

  • Affordable resources, such as Galileo and the Georgia Knowledge Repository.
  • Student support, including use of intelligent agents to track student progress.
  • Networks: Peachnet gives us a good foundation.
  • Assessment: We will need to identify testing sites.  Faculty governance and training will be issues.  We need to research how to build appropriate online assessment tools.
  • Analytics: There is no single standard currently in the USG.  Daytona State University has a good system we should look at.
  • Mobile devices: A D2L mobile app is being deployed.  We need to rethink classroom design as mobile devices become more dominant.
  • ID management: We need to partner with AT&T and Verizon for verification and location tracking of online students.
  • Financial: The goal is to produce a $150, 3-credit course.  The financial model to support such a goal is unclear.

The overall goal of reducing the total cost of enrollment, while increasing graduation rates and changing formula funding, will be transformational for higher education.

Senior Vice Chancellor Davis summarized the next steps that will be taking place regarding MOOCs:

  • Supporting Georgia Tech’s work in this space (i.e., their MS in Computer Science delivered by MOOC).
  • Supporting the Next Generation Learning Challenge program redesign at Columbus State, and scaling the project.
  • Building on the Coursera-USG relationship.  We’re on the verge of a D2L MOOC partnership.  Developing ties with Udacity, EdX, and others.
  • A systemwide consortium will be developed to work on this.  Georgia State will coordinate.  Institutions may play one or more roles (lead institution, implementation partner, enrollment institution, expert assistance partner).
  • UGA and four other schools are developing a pre-calculus course MOOC.
  • President Rossbacher will chair the USG’s New Learning Models Advisory Committee.  Its charge will be to review and advise on key eLearning trends, critical uncertainties, and strategic uncertainties.

More RACAA stuff next BLAB!

Last Week’s Trivia Challenge

Last time’s trivia challenge focused on the word “blue” and related friends.  Our winner was Marietta Monaghan, with a respectable four correct.  Here are the answers:

  1. Popeye’s main enemy.  Bluto.
  2. Classic Roy Orbison song.  Blue Bayou.
  3. He’s under the haystack, fast asleep.  Little Boy Blue.
  4. 1930 film starring Marlene Dietrich and Emil Jannings.  The Blue Angel.
  5. The two Beatles songs with “Blue” in their name on Magical Mystery Tour and on the White Album.  Blue Jay Way and Yer Blues, respectively.

This Week’s Trivia Challenge

Just to be fair, today’s trivia challenge focuses on the word “red” and related friends.  No looking up the answers now!  SEND ALL ENTRIES BY EMAIL TO zszafran@spsu.edu, since if you put them as a response on the BLOG, everyone will be able to see them!

  1. Almost got eaten by the big bad wolf.
  2. There’ll be no more sobbin’ when he starts throbbin’ his old sweet song.
  3. Cantankerous comedian, starred in Sanford and Son.
  4. British science-fiction TV show, about a mining spaceship inhabited by Dave Lister, Arnold Judas Rimmer, a senile computer, and an evolved cat.
  5. De facto national anthem of the People’s Republic of China during the Cultural Revolution.

12 Responses to July 15, 2013

  1. bbrownspsu says:

    I see tension between granularity and speed, and my own reaction is to favor speed.

    I set up a homework assignment to be due before the engagement date and report as “not engaged” those students who either do not submit it at all or who give it a “lick and a promise” effort. This mechanism has the great advantage that it works equally well for online and classroom students. If I allow 1-1/2 weeks to present enough material that students actually can do an assignment and allow them one week to do it, I’m right up against that three-week deadline.

    For face-to-face classes, I know who’s inattentive and who’s always tardy, but I often don’t have names attached to those faces yet. I suppose I could embarrass them by demanding the names of those who come in late or who are sitting in the back, dorking around with their laptops. I’m not sure this accomplishes helping the students, and I’m pretty sure it does NOT accomplish treating them like university students and adults.

    So, before we change anything, I’d like to see a concise, written summary of what we’re trying to accomplish by having early warning. (I think I know, but I’ll bet if you ask five of us, you’ll get six answers.) Then I’d like to see some suggested indicators of engagement, or lack thereof, such as given above, but with definitions and mechanisms for measuring those indicators. Example: Is a student “tardy” any time after the professor has called the class to order, or after five minutes, or by some other measure? What does it mean to be “tardy” in an online class?

    Finally (whew! at last!) let me caution against automating ill-defined processes. The result is an ill-defined, but authoritative-looking, result. Or, in the vernacular, “garbage out.”

    • bbrownspsu says:

      I apologize for the lack of an introductory paragraph. I was, of course, writing about the engagement, or early warning, process.

    • spsuvpaa says:

      Hi Bob,
      Lots of data indicate that simply taking attendance makes a big difference in outcomes for freshman courses. We’d like to think our students are adults (or well on the way there), but they’re really not, especially regarding issues of time management.

      I like your idea about choosing a few indicators of engagement, and defining ways of measuring them. IF we go forward with asking WHY the student is not engaged, we should make sure we do some basic definitions of what the thresholds are.

      –Zvi

      • bbrownspsu says:

        The bosses don’t let me teach freshmen any more. {grin}

        Seriously, though, I teach only 3000 and 4000 level courses. I suppose I could try taking attendance, although I’d have to decide what to do with the result. Up to now, I haven’t imposed an academic penalty for missing classes.

  2. dontknowspsu says:

    I hope Jill is not upset for you calling her age out! 🙂

  3. Bri Morrison says:

    I agree with Bob Brown’s sentiments above. Given that the “first” week of class is during the drop/add period, I usually don’t start learning names or giving assignments until the 2nd week of classes. The students have a week to complete the assignment (to allow those students who work to have a weekend in there), and so they’re just turning the first assignment in during the 3rd week of class, which is when the early engagement report is due.
    I believe a better indicator comes at mid-terms; by that point in the semester I have learned most names, have multiple assignments and a test on which to base my judgement of student success or failure, and can offer reasonable advice on what the student can do to improve in the class. Perhaps the granularity should be added then (boxes in addition to S and U) rather than at the early warning indicator. Or move the early warning indicator out by at least 2-3 weeks (making it the 5th-6th week of classes).

    • spsuvpaa says:

      Hi Brianna,
      Problem is, the later in the term the notification to the student comes, the less time there is for them to do what is needed and “turn things around”. I think that most faculty use a combination of attendance, whether the student is participating in class, and the grade on a first quiz (or in your case, assignment) to determine the early warning grade. It’s hardly perfect, but it’s not bad. It’s been my experience that we can predict the outcome for the majority of students based on what we see in the first couple of classes. Has that been your experience too?

      –Zvi

      • Bri Morrison says:

        I think for me it takes at least 3 weeks before I can accurately judge where I think the student will land. Especially for freshman classes, the first couple of weeks can be quite hectic; even more so if they are adding/dropping classes. Many students returning to college after a break also need a readjustment period to understand the workload and how they’ll balance it with the other commitments in their lives.
        This is one area where I think online courses “beat” face-to-face classes. With an online course I can tell after 1-2 weeks if the students are logging in and doing their assignments. For the face-to-face classes (and certainly introductory programming) I have to teach them so much up front before they can even get the first assignment. I usually wait until after they’ve had their first lab (which is in the 2nd week of classes to prevent those who add the class the first week from missing that all important first lab) to assign the first assignment. It’s when they’ve turned in that assignment (now the third week of class) that I begin to see who might need the extra hand holding.
        Briana

  4. Joel Fowler says:

    First, speaking from my position as Faculty Observer, I’d suggest avoiding the term “minutes” in reference to the observer reports, even when modified by terms such as informal or unofficial. I’ve tried to consistently use the term “report”, and included the disclaimer, to emphasize that these are not minutes and are not intended to be minutes.

    Second, for those who don’t have the stamina to listen to 2 hours and 40 minutes of meeting podcast, I did prepare (in response to another request) a guide to where some elements of the discussion can be found, at http://podcaster.gcsu.edu/podcastdata/spsu/Channel_464421843/podcast_97274/97274.mp3 . The link doesn’t have time-stamps, so these positions are geometric:
    1) The engagement discussion begins around 40% of the way through, but is lengthy (~ 60 minutes).
    2) The “exchange” between the VPAA and a dean concerning methodology occurs just a little before dead center. It lasts about 10 minutes.
    3) It isn’t referenced directly in the BLAB, but I personally found it significant: the decision to require D2L usage by all faculty, regardless of the online nature of their courses, begins around 2/3 of the way through.

    Third, speaking just as a faculty member and not as observer, I have much sympathy and agreement with the dean who perceived a lack of objectivity (my term) in the discussion of whether an enhanced EWS is a good idea. I recommend listening to that part of the podcast, since the arguments are presented there better than I can present them here. But I do believe there’s plenty of data already in the system that could be gathered and analyzed concerning student performance, even if it isn’t perfect, if first we more clearly decide what we are trying to do, beyond an all-encompassing “identify students who may be in trouble”. It’s also worth pointing out, data-wise, that while the EWS has had faculty inputting engagement for quite a few years, consistent and regular communications to the students, based on the reports, are a much more recent addition. With such a short history, I think it would be good to spend more time seeing what we have, before we start tampering with it.

    But here I’d really like to focus on what could be learned from the faculty, if they were involved. We are the front line users of the EWS. But no systematic data has been gathered from the faculty on how they use the current system, what sort of triggers they use in the current system, how time consuming the current system is, what sorts of interactions it generates with students, how being identified as unengaged changes student behavior, how any changes in behavior translate into performance, other pluses and minuses faculty believe the current system has, how satisfied they are with the current system overall, how much work they think more detailed EWS data would be to gather and report, what they would like to see in a new system (if anything), how they might use a new system, etc., etc.

    I’m disappointed that working -with- faculty on these questions, and in a fashion that is more involving and less anecdotal than just inviting comments on an already lengthy blog post during summer, wasn’t the first course of action pursued. This would be more productive than declaring the inclusion of faculty at the table is premature on an issue that so directly affects them and their work, about which they can offer so much insight, and which is, apparently, important enough to consume nearly an hour of a Dean’s Council meeting.

    More specifically, regarding our current EWS, I’ll give my answer to some of these questions, even though this is just one person’s opinion based on what I do with the current system. The students I have that pay attention to being labeled “unengaged” mostly know what their issues are, whether it’s attendance or performance or study habits or something else. There are a very few that question being labeled unengaged and truly have no idea why. But both of these groups benefit from the face-to-face conversation I then have with them about their difficulties. I like the fact that being labeled unengaged, without details, triggers a personal conversation. It’s a discussion for another time, but I believe a big negative of the online tools is their potential for lessening face-to-face contacts with students that could take place. In any case, my sense of the EWS is that its true value isn’t in what it tells a student about how they’re doing (since most already know that); it’s in the personal contact that tells them that we have noticed their struggles, are concerned, and want to help.

    Sorry for the length. Not sure how many people read post-BLAB comments, particularly in the summer, but I hope I haven’t taxed your patience.

    • spsuvpaa says:

      Hi Joel,
      I think you’re over-reading a few things. First, discussing whether we want to gather information on WHY the faculty member is reporting a student as unengaged is independent of what we tell the student. If we knew that student X is someone who perpetually skips classes, we might (for example) notify the school advisor, who could call in the student for a discussion, but not change anything in the letter to the student, in order to keep the conversation between the student and the faculty member as it currently is (and as you like). We might simply do it to have a better general understanding of what the major issues are that cause our students to struggle (and in what order), and let it go at that.

      As to working with faculty on these questions, we certainly intend to do that. In my opinion, sending out an open-ended “What do you think of EWS? Can we improve it? Should we change it? How?” won’t get us anywhere; it’s too ill-defined. What works best is when there’s a discussion document with some specifics for people to react to. As you state, part of the document should be some data related to what effects the EWS has had. Since we don’t have those things yet, it’s premature to involve the faculty. IF we decide it’s worth pursuing, we’ll generate said discussion document and involve the faculty in the discussion. My inviting faculty to comment in the BLAB is hardly the end of the road.

      I appreciate your comments as to what you think works about the EWS and why.

      –Zvi

  5. Joel Fowler says:

    Hi Zvi,

    Just a few quick replies:

    You’ve made a fairly narrow response to just one specific I mentioned regarding the current system: “… discussing whether we want to gather information on WHY the faculty member is reporting a student is unengaged is independent of what we tell the student.” I’ll simply state that we shouldn’t presume the usefulness or non-usefulness of what we tell the student without studying the system more than we have at this point, and taking into account the interests of many faculty, not just mine or yours.

    The second part of your reply, “… sending out an open-ended “What do you think of EWS? Can we improve it? Should we change it? How? ” won’t get us anywhere”, seems to be replying to something you wish I had said, rather than what I did say, about involving faculty.

    Also, I’ll have to politely disagree with your presumption that “What works best is when there’s discussion document with some specifics for people to react to … Since we don’t have those things yet, it’s premature to involve the faculty.” I don’t, in general, agree with the notion that it’s best to move an issue down the field by having a small group draft a document, and then asking people what they think of something they had no part in creating. It can also create the appearance that the small group is controlling the agenda and assumptions under which the discussion will proceed. That aside, if the goal is to produce the best “discussion document”, why wouldn’t you heavily involve the faculty in its creation, both for their experience with the current system and their buy-in?
