Evaluation Workshop- Jan 14th 2009

Dawn and I attended the Evaluation Workshop organised by JISC at Maple House, Birmingham yesterday, 14th January 2009. Overall the event was very useful in that it helped us to focus on what evaluation means for the PC3 project. I was happy that the Support & Synthesis (S&S) team were comfortable with the idea that an evaluation plan for a project lasting four years is likely to change over that time, particularly given the experience of the chocolate cookie activity!

Something else that caught my attention during the day was the recognition that different stakeholders have different expectations of our project, and that we need to ensure our evaluation plans include activities that allow us to say something towards each of these expectations.

Helen Beetham talked about the ‘power words’ in the original bids and the need to be clear what these mean in the context of our respective bids. Early views on this for us certainly highlight words such as ‘coaching’, ‘flexible’ approaches and ‘personalised curriculum’; feedback from project events run so far clearly shows that staff have very different views about what these mean for them.

The discussions around ‘baselining’ a project were helpful for us, as we are modelling current processes and workflows in order to demonstrate what they currently enable and where changes are needed to facilitate the radical changes to curriculum design inherent in our project. There was also a suggestion that one evaluation activity could be to contrast a typical output of the current processes (a subject-specific award design over a limited time period with limited student choice) against a typical output of the modified processes (a generalised award design over a variable period with a wide range of choice).

There was also a timely reminder that each project is part of the S&S programme and that the programme itself will be evaluated, so it would be helpful to identify aspects of our evaluation plans that offer insight into the programme aims and objectives.

Sarah Knight closed the event with a few key dates for the project and evaluation plans, so perhaps I had better stop chattering here and make sure that we meet the first deadline for the draft project plans on February 2nd.


One Response to “Evaluation Workshop- Jan 14th 2009”

  1. Dawn Says:

    The evaluation workshop was a valuable insight into the variety of concerns and directions individual projects within the programme are currently exploring. It was good to meet up with fellow cluster members and share ideas and express opinions, even if most of this was focused on cookieness 🙂 . John has already covered some important insights to come out of the day and I would like to add a few more.

    One question that came up early in the proceedings was the difference between research and evaluation. I understand there has been some discussion on this on the circle site. I haven’t checked this out yet, deciding to voice a view prior to reading more perspectives. For me, both research and evaluation (either of which may sit within the other, depending on your philosophical view and methodology) are just tools (possibly tool boxes) to be used when required. One individual at the workshop (I don’t recall who) suggested that defining these wasn’t what mattered, and in most respects I agree. It’s the question you ask that will inform whether your process is more like one than the other. I say “more like” because there is no real dividing line between the two. As processes they often encompass many of the same tools, which are never firmly fixed to one process or the other and don’t really determine which process you are using. Both processes are highly flexible and offer only guidelines as to how they should proceed. The important thing is the question or questions that you are asking.

    John noted that there was an emphasis on stakeholder perspectives, and in general there were some concerns expressed about the perceived value of project outcomes to stakeholders. Again, for me anyway, this comes back to asking the right questions. By gathering stakeholder views we can frame evaluation questions so that the resulting evidence is presented from the same perspective as those views, and stakeholders can therefore easily identify value (or otherwise). I bring this up because PC3 has a large number of prospective stakeholders, each of whom will require strong evidence of value in order to take up our flexible, personal, coached view of education.

    One other thing on the subject of getting the question right, pointed out by Helen from the S&S programme, is that the question has to interest those who are asking it and doing the work. Just between the four main members of our team there are most likely some differences in how we view PC3, and it might be a good idea to bring this up at some point, especially since things will have changed since the proposal was first developed.
