Jan 30, 2016

Grades are calculated automatically in my grading rubric using attribute values assigned or calculated at each of the three levels of the hierarchy. So to explain how it works, I’m going to start with the lowest level and move up to the highest.

Before I do though, it’s perhaps worth noting that these calculations use the $Checked attribute. If checkboxes aren’t visible in the outline, you can show them by going to VIEW ==> USE CHECKBOXES. Or if you prefer, $Checked can be set as a key attribute in the p_rubric_component prototype.

Descriptors

Basic grade values are established here by selecting one of the available descriptors and ticking its checkbox. If you wish to change the default point value of a descriptor, you may do so by entering the new value in the key attributes. The agent presented in the previous post will update the descriptor’s $Name to reflect the new value.

Two things are worth noting:

  1. Once the grading is done, there should only be one descriptor checked per criterion. The final step in the grade calculation is a simple sum: if more than one descriptor is checked, the student will be awarded points more than once for that criterion, and their final grade will be higher than it should be. (See the example just after this list.)
  2. The criteria and final grade calculations that I use assume that each descriptor is evaluated by a point score between 0 and 10. If this isn’t the case for you, you’d have to adjust the calculations.
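
To give a made-up illustration of that first point: if descriptors worth 6 and 8 points are both left checked under the same criterion, the sum counts both, and that criterion ends up scored 14 out of 10 before weighting.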

Criteria

The criteria-note calculations take the point value of a checked descriptor and multiply it by its criterion’s weight to determine the points awarded toward the final grade. (See note on weighting below.)

To generate my weighted grade, I need to multiply the assigned grade — stored in the selected descriptor’s $RubricPointValue — by the criterion’s weight — stored in the criterion’s $RubricCategoryMax attribute. The simplified calculation that would seem to perform this multiplication (but doesn’t) looks like this:

$RubricCategoryPointValue=$RubricPointValue(child)*$RubricCategoryMax

This tells TBX to pull the point value stored in the child note (i.e. the descriptor), to multiply it by the criterion weight stored in the criteria note, and to store the result in $RubricCategoryPointValue.

However, there is a problem: each criterion has more than one child. This rule doesn’t care about that and simply calculates the weighted value of the first child, whether I’ve checked that child or not. To make my rubric work, I have to tell TBX to use the $RubricPointValue of the child I have checked and to ignore the rest.

Using Mark Anderson’s TbRef, I discovered that the way to do this is to use “sum_if(x, y, z)”, which I found very intimidating at first but which I’ve decided is not as complicated as it seems. As I understand it, this command tells TBX to add up numerical values from notes that match a condition laid out in the parentheses. It takes three arguments, which I’ve labelled x, y, and z. (I don’t know if all three are required, but I use all three of them in my calculation.) What they stand for is this:

  • X tells TBX which notes to look at by indicating a location. For example, I could replace X with “all” and TBX would look at all the notes in my project, but for my rubric, I only need my calculation to consider “children.”
  • Y tells TBX the condition a note has to meet before its value gets counted. Now I’m sure this argument allows the initiated to do incredible, magical things, but thankfully I only have to ask TBX to look for checkmarks, which is a boolean value — yes/no, true/false — and so, I need nothing more complicated than “$Checked==true”.
  • Z tells TBX which attribute’s value to add up for each note that passes that test. In my calculation, that’s the grade stored in the checked descriptor, so I use “$RubricPointValue” for the value of z.

Combine these three arguments inside the parentheses, separate them with commas, and then use them to update the equation presented above and you have:

$RubricCategoryPointValue=sum_if(children,$Checked==true,$RubricPointValue)*$RubricCategoryMax*0.1

This looks complicated enough to make my heart flutter and my stomach ache, but it works and, in a weird sort of way, is logical. You just have to work it up step by step. (…and ask questions.)

Regarding the final term of the equation (“*0.1”): because I use values from 0 to 100 to define criteria weighting, I have to multiply the product of my calculation by 0.1. If I don’t do this, the final grade calculation will report a grade of 88/100 as 880/100. Defining criteria weight with a scale of 0-10 would eliminate this problem, but I prefer thinking of weighting as percentages, which is more natural with a 0-100 scale. Multiplying by 0.1 moves the decimal point to where it belongs.
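
To make the arithmetic concrete, here is a hypothetical worked example (the numbers are mine, not from the rubric): suppose a criterion has a $RubricCategoryMax of 20 and the checked descriptor has a $RubricPointValue of 8. The rule then computes

8 * 20 * 0.1 = 16

so that criterion contributes 16 of a possible 20 points toward the final grade.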

[Image: The Criteria-level Rule]

This calculation is stored as a rule in the p_rubric_category prototype, which then automatically applies it to all criteria notes in all instances of the rubric.

Final Grade

In my rubric, all the heavy lifting happens at the criteria level. Once the weighted point values are calculated and stored in the $RubricCategoryPointValue attribute, all that’s left to do is to add up these weighted values. To do this I use the “sum(x,z)” command. In this command, X and Z function in the same way as they do above. No Y argument is necessary because the command isn’t evaluating a condition in order to determine which values to add: “sum(x,z)” adds all the values found at the designated location.

As a result, the rule that calculates the final grade is simpler and reads:

$RubricEssayGrade=sum(children,$RubricCategoryPointValue);

The result of this sum — which will always be a value between 0 and 100 — is the final grade of the essay stated as a percentage. It is stored in the $RubricEssayGrade attribute where it is used to set the rubric’s $Subtitle and $HoverExpression.
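
Continuing the hypothetical example from above: if the criteria produce weighted values of 16, 27, 18, and 27, this rule adds them up to 88 and the rubric reports 88/100.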

[Image: The Root-level Rule]

And that’s it for the grade calculations. In the next post, I’ll explain how I export the rubric as a comment sheet for students.

****

Note on weighting

If all the criteria were weighted the same or if the descriptors awarded points on scales that directly reflected differences in weighting, the criteria-based calculation would not be necessary. The root-level rule could use sum_if(x,y,z) to calculate the final grade directly. However, I choose to use a standard scale (0 to 10) to evaluate all criteria and then to assign the weight for the criteria in a separate attribute.

I prefer using multiple attributes for two reasons. First, the standard scale makes grading easier: I know “7” always means 7/10 (and not 7/15 or 7/20). Second and more importantly, having a separate attribute for the weighting allows me to adjust (and experiment with) the relative weighting of my criteria without having to continually adjust the individual point values of dozens of descriptors. This makes the rubric much easier to adapt for use in new assignments.
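
If you did fold the weighting directly into the descriptor point values, a single root-level rule along these lines should be all you need (a sketch I haven’t tested, and the scope designator may need adjusting to fit your own hierarchy):

$RubricEssayGrade=sum_if(descendants,$Checked==true,$RubricPointValue)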

 

 January 30, 2016  Hypertext
Jan 27, 2016

For reasons I’ll leave obscure, this entry in my commonplace book is on my mind this evening.

Our culture tends to reward men who are jerks, and when they get together and work in concert, there’s often little you can do to stop them.

 January 27, 2016  Moments
Jan 26, 2016

Late last year, I was flipping through some passages in some of Edmund White’s books (A Boy’s Own Story, Hotel de Dream, Proust). I was also reading a bit about him, and I realized that I hadn’t read many of the novels he claimed as antecedents. I’d dealt with queer theory and criticism at the margins of my own research but for a variety of reasons had never systematically read through the major works or the corpus that served as its rough working canon. Curious, I sat down and put together an initial bibliography and began reading. Now, a couple months on, I’m still reading, I’m revising and building that bibliography and, most importantly, I’m excited.

The past couple years have not been easy ones intellectually. When I settled into a job at a non-research institution with a non-liberal arts focus, I initially felt a sense of freedom: without the burden of teaching my research, I began to read with fewer constraints than I had in years. The very specific pleasures of picking a novel for its cover and reading it blind or picking based on a friend of a friend’s recommendations became more and more my norm. I could and did read anything. Unfortunately, I think this blog shows — without me meaning it to — that this got old quickly and that I’ve read with a fair amount of boredom for a while now.

In part, I was reading a lot of books that weren’t very good. When I read ones that were, I often lacked a context (or even a reason) for engaging with them in a meaningful way. So I was reduced to observing, noticing, and, when something was noteworthy, calling it out. But nothing stuck or built up. I was simultaneously struggling with my seemingly ever-expanding and increasingly administrative responsibilities at work. Forced to choose between unsatisfying reading (and so nothing to write about) and complicated, “important” problems at work, I slowly and without noticing devoted more and more of my mental life to helping to run a school.

I didn’t consciously turn to White’s novels for guidance but that’s what they offered by reminding me about the manner in which I’ve always read. For good or for ill, I’ve never been satisfied with random observation. Even as a kid, I always had what I called “my research projects.” I’d be curious about something and would go to the library and check out everything I could about it and would read until I felt I knew what I wanted to know. Then I’d move on to the next project. And there was always a next project because I was always bouncing from one thing to the next as my interests led me.

Sometimes my questions were simple and easily answered; at other times, complicated and involved. Once a very young me figured out what lips were. That was pretty easy. Learning Greek mythology — a childhood passion — took time. Curiosity was my guide, not seriousness, and my curiosity always provided its own context and purpose even when I couldn’t put my finger on it at first: I once spent the better part of a year in my early twenties reading crap book after crap book about astrology and the tarot, wondering why on earth I was doing it, but keeping it up until I felt done. When I finally did and looked back, I realized that I’d just explored a highly developed and convoluted instance of archetypal interpretation that was distinct from Biblical exegesis. I found that interesting.

I’ve also always been an encyclopedic reader. I stumble upon a writer, become interested, and then read in a burst, often in chronological order, everything they’ve written up to that point. The first time I remember doing this was with Lloyd Alexander when I was twelve; the next was with Stephen King the summer I turned fourteen. Sometimes this led to great things: I discovered Faulkner and spotted the patterns I wrote a dissertation about by reading in this way. Sometimes it didn’t: my summer of Stephen King turned me off him irremediably. The thing is, though, that however random these bursts were — I often discovered these writers by chance — the bodies of works provided their own context. Operating as an oeuvre, they directed my thinking about my reading in the same way that my curiosity — expressed as a question — pointed my way in the library.

All of this may sound like ridiculous nostalgia but it’s not: I’m not yearning to return to some imaginary, childlike ideal. (Blech!) Rather, I’ve realized in the past few days that flipping through White’s novels last December was the beginning of a new curiosity-based project. With my bibliography, I’m once again working through a corpus that provides individual works with a context, and the books I’ve read are already building upon each other. Seeing this, I recognize that there’s something in this arrangement of things that fits with my disposition and supports the better angels of my nature. It’s a happy recognition because, looking back over the past year and a half of this blog, I realize how very boring it is to be bored. So good riddance.

 January 26, 2016  Reflections
Jan 25, 2016

[Image: Our Lady of the Flowers cover]

Something of the poetry of this book is suggested by the scene of Divine’s death in its final pages.

In that scene, Divine pulls her watch from between her thighs, hands it to her mother. Their hands meet, rest together for a moment. Then, as she lies there, Divine releases from her bowels a warm lake of filth. She sighs, spilling blood from her mouth. Then, sighing again, she breathes her last.

Divine’s funeral occurs in the early pages of the novel, so her death has been long in coming. But when it arrives, it happens unexpectedly at great speed and is horrific and deeply moving. Sentiment is not however its purpose, and the imagery of the scene, as tightly stretched and as densely packed here as it is everywhere in the novel, alludes to grand histories of noble defeat while offering an ironic negation of the trinity. Divine’s dirt, blood and wind echo the divine abstraction, rooting it in the earthiness of her body.

Divine is an assemblage, a magnificent, poetic creation. So her death, a scene conjured by imagination and arranged by figures, is not an end. Immediately, the narrator-protagonist (a prisoner named Jean) anticipates the pleasure of imagining new stories for Divine and closes his book with an account of a wonderfully lewd letter she received from her pimp.

The book lives.

 January 25, 2016  Book Logs
Jan 23, 2016

In this second post I’m going to go through the prototypes and agents I used to make my grading rubric. When I’m done, I’ll have explained everything that doesn’t have to do with grade calculations or export. (Subsequent posts will take up those topics.) The result is a long post that is predictably schematic. If you’re reading, I’ve assumed you’re interested in nuts-n-bolts.

Before I get started though, I should make clear that, in what follows, when I say “the rubric” I’m referring to a prototype that contains a two-level hierarchy. Because my rubric is a prototype (called p_essay_rubric), it sits in the container where I store the rest of my prototypes and appears as an option in the assign prototype contextual menu.

[Image: The rubric is a prototype]

When I assign this prototype to a new note, that note automatically duplicates the entire prototype hierarchy, providing an editable copy of the rubric that I can use to evaluate an individual student’s work. I will call these editable copies “instances of the rubric.”

I should also mention that the rubric relies upon seven user attributes, each of which is assigned to only one level of the rubric hierarchy as a key attribute.

[Image: The user attributes]

Of these user attributes, three ($RubricCategoryPointValue, $RubricCategoryMax, and $RubricPointValue) are number values used to calculate grades. The rest hold either sets or strings.

The Rubric Hierarchy

The rubric is organized with three levels of hierarchy.

The Root (“The Rubric”)

The top level of the rubric hierarchy has three key attributes: $RubricEssayGrade, $RubricAssignmentName, and $RubricExportFileName. All of these are set automatically by rules, agents or on-add actions. The $HoverExpression is set to display $Subtitle. (A separate rule sets the $Subtitle to the final grade. See below.) The note’s $Text will be presented as a general introductory comment in bold italics on the exported comment sheet. It is blank in the rubric but can be edited in each individual instance of the rubric. If no comment is entered, the export will insert a stock intro. This top-level note is a prototype and is named p_essay_rubric.

Criteria

The second-level notes identify the criteria used to evaluate the essay. There is one note per criterion.

[Image: The criteria in my rubric]

Each note’s $Text contains a brief description of the criterion. When the final comment sheet is exported, this text will be presented as an italicized introduction to the specific feedback. $RubricCategoryPointValue and $RubricCategoryMax appear as key attributes. $RubricCategoryPointValue is calculated by a rule, but $RubricCategoryMax is set manually and establishes the relative weight of the criteria in the calculation of the final grade. (A value of 5 gives that criterion’s grade a weight of 5% in the final grade.) Criteria are assigned the prototype p_rubric_category.

Making Revisions to the Criteria: Evaluation criteria will likely change when the assignment or the course changes, and so both the number of criteria and their descriptions are fully editable. However, as I have set things up, the grade calculations assume that the sum of the $RubricCategoryMax values across all the criteria will equal 100. To keep the calculations correct, any change to the number of criteria or to the max value of an individual criterion has to be balanced by changes to the other notes’ $RubricCategoryMax values. Although $Text descriptions of the criteria can be edited freely, once I have begun grading I do not touch them so that each student’s instance of the rubric will match all the others.
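
To give a made-up example: a rubric with four criteria weighted 40, 25, 20, and 15 sums to 100; raising the 25 to 30 later on means trimming one of the others by 5 so that the total still comes out to 100.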

Descriptors

The bottom level of the rubric hierarchy contains descriptors attached to point values. You grade an essay by selecting one descriptor within each criterion and ticking its checkbox. $RubricPointValue and $RubricCompDescriptiveName are set as key attributes. Although each note has a default value for both attributes set by the rubric prototype, they may be edited freely. The $Name of each of these notes is set automatically as a composite of the key attribute values and will be updated to reflect any changes made. The note’s $Text will be exported as the principal feedback on the comment sheet, and so, descriptor texts are completely flexible.

[Image: Sample Descriptors]

It is possible to have as many or as few descriptor notes as you like for each criterion. There can even be different descriptors that are assigned the same point value. Descriptor notes are assigned the prototype p_rubric_component.

Prototypes

The rubric hierarchy relies upon four prototype notes:

  • p_student
  • p_essay_rubric
  • p_rubric_category
  • p_rubric_component.

Together these prototypes:

  1. recreate the prototype’s hierarchy of notes for each instance of the rubric;
  2. set key attributes in each instance of the rubric;
  3. set a variety of rules and on-add actions that are used to define consistent attribute values in instances of the rubric;
  4. distinguish between levels of the hierarchy, allowing agents to act upon specific sets of notes;
  5. set rules that calculate grades based on checkbox selections in an instance of the rubric.

What follows is a brief explanation of each of the four prototypes.

p_student

The p_student prototype is not a part of the rubric itself, but I always create instances of the rubric as a child of a student note. (In my course file, I keep one note per student as a record of ID number, email address, etc.) It makes sense to use on-add actions set by the student note prototype to fill in basic attribute values for the rubric.

The first on-add action links the instance of the rubric to a specific student’s work by having the rubric pull its value for $s_LastName from the student note:

$s_LastName=$s_LastName(parent)

By setting this value immediately, I avoid the situation where I forget to label a rubric and can’t figure out to whom the comments should go.

The second action creates an instance of the rubric by setting the prototype of the note I create to p_essay_rubric:

$Prototype="p_essay_rubric";

Finally, I set $RubricAssignmentName:

$RubricAssignmentName="Essay One (Moby Dick)";

This last action has to be updated each time the rubric is used for a new assignment. It could probably be automated somehow, but I haven’t taken the time to figure out how. I don’t mind changing the string manually when I move from one assignment to another. Together, these actions look like this on the action tab of the p_student inspector:

[Image: On-add actions for student note]
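
For reference, strung together as a single on-add action the three steps would read something like this (my guess at how they combine; the inspector may show them in a different order):

$s_LastName=$s_LastName(parent);$Prototype="p_essay_rubric";$RubricAssignmentName="Essay One (Moby Dick)";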

p_essay_rubric

I began this post by pointing out that my rubric is a prototype. This is that prototype, and except for its $Name, it and its descendants are identical to the instance of the rubric I use in the screencapped demonstration included in my last post.

[Image: The prototype hierarchy]

This is possible because the attribute $PrototypeBequeathsChildren is set to true for this prototype. Setting this attribute to true means that when a new note is assigned this prototype, the criteria notes inside this prototype—along with their text, their rules, their on-add actions, etc.—will automatically be created inside the new note as well. Because student notes automatically assign this prototype to their children, all I have to do is create a new note as a child of a student note to create a complete instance of the rubric.

[Update: if you are thinking about using prototypes to bequeath children in this way, it’s very important that none of those children are prototypes in their own right. Instead, assign the children prototypes that are outside the bequeathed hierarchy. The TbRef entry on prototypes can give additional info.]

[Image: Creating notes in a container with a prototype]

In addition to setting key attributes, this prototype contains one non-calculation rule:

$Subtitle=$RubricEssayGrade+"/100"

This sets the subtitle to show the final grade as a fraction out of 100. It could just as easily be set to display the grade as a percentage labelled with “%”. I’ve also set the hover expression of this note to $Subtitle.
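
If you did want the percentage format, a rule along these lines should do it (a variation I haven’t actually used):

$Subtitle=$RubricEssayGrade+"%"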

I’ll explain the calculation rule in the next post.

p_rubric_category & p_rubric_component

These prototypes are assigned to criteria and descriptor notes respectively and are relatively simple. They set key attributes and identify the notes in a way that can be queried by agents. In addition, p_rubric_category assigns a calculation rule. As with p_essay_rubric, $PrototypeBequeathsChildren is set as true for p_rubric_category. That’s it.

Agents

The final piece of the basic layout is a set of three agents.

a_set_rubric_ExportFileName

The most important of these agents sets the file name that will be used when the comment sheet is exported. The agent queries for the p_essay_rubric prototype and then performs this action:

$RubricExportFileName=$s_IDNumber(parent(original))+"--"+$s_FirstName(parent(original))+"-"+$RubricAssignmentName

In the demonstration I included in my previous post, this action produces the file name: 123456--Brian-Essay One (Moby Dick).

Here’s how.

First off, everything looks simpler if we ignore the parentheses for a moment. If we do, it’s clear that the opening number sequence is being reported as the value of $s_IDNumber and “Brian” is being reported as the value of $s_FirstName. To make these two results easier to read, I separate them with a double hyphen by adding:

+"--"+

I’ve used the same technique to place a dash between my name and the assignment name later on.
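
In other words, with the parentheses set aside, the action just glues strings together; using the demo’s values, it amounts to something like:

"123456" + "--" + "Brian" + "-" + "Essay One (Moby Dick)"

which is exactly the file name shown above.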

Now those parentheses…

I only figured out what to do here with help offered on the TBX user forums (link to thread). Using my agent’s action, I’m going to explain what I learned there. To make things clearer though, I’m initially going to ignore everything from my agent except the ID Number.

Consider the following action:

$RubricExportFileName=$s_IDNumber

This action will cause the export file name assigned to a note to be the same as the ID number assigned to that same note. I can’t use this action in the rubric because the student ID number is stored in a student note while the export file name is stored in the rubric. To make the rubric’s export file name use the container’s student ID number, I need to point the action to the information I want in the student note. So I tried doing this by pointing the action toward its parent:

$RubricExportFileName=$s_IDNumber(parent)

Unfortunately, this doesn’t work either. Why? Well, agents work on the aliases of notes that they gather using their query (and not directly on the original note). So when my agent action pulled the ID Number from the parent note, it wasn’t pulling the information from the parent of my rubric. It was pulling it from the parent of the alias created by the agent, which is the agent. And the agent didn’t have any student ID information. What I needed was to point my action not to the parent of the alias but to the parent of the original note. To do this, I need this:

$RubricExportFileName=$s_IDNumber(parent(original))

Make the same changes to the first name attribute, and I end up with an agent that sets the needed export file name by pulling information from three different attributes.

[Image: Action to set export file name]

a_set Rubric Component Name & a_set_rubric_Name

The remaining two agents simply maintain note names. The first queries for the prototype “p_essay_rubric” and sets $Name using the export file name:

$Name=$RubricExportFileName;

The second updates the $Name of a descriptor to reflect any revisions to the point value or the descriptive name. It queries for the prototype p_rubric_component and then performs this action:

$Name=$RubricPointValue+": "+$RubricCompDescriptiveName;

Because the attribute being set by this action is part of the same note as the attributes holding the information it is using, there is no need for parentheses this time.

Wrapping Up

So this has been a long post, and a boring one too I think, but hopefully it lays out the basic strategy I used to make the creation of specific instances of my rubric more-or-less effortless.

Next time, I’ll explain how I’ve set up the calculations.

 January 23, 2016  Hypertext
Jan 20, 2016

Poetry is a vision of the world obtained by an effort, sometimes exhausting, of the taut, buttressed will. Poetry is willful. It is not an abandonment, a free and gratuitous entry by the senses; it is not to be confused with sensuality, but rather, opposing it, was born, for example, on Saturdays, when, to clean the rooms, housewives put the red velvet chairs, gilded mirrors, and mahogany tables outside, in the nearby meadow.

–Jean Genet, Notre-Dame des Fleurs

 Genet Defines Poetry  January 20, 2016  Commonplace Book
Jan 20, 2016

Last semester, I spent a few weeks creating an analytic grading rubric using Tinderbox. I don’t generally use rubrics, but they are very popular with some of my colleagues, and I thought I’d see if I could come up with one that could actually work for me and my students.

The rubrics I’d tried in the past were complete non-starters. For me, evaluation should involve plainly written comments about what works, what doesn’t and why. I also want to suggest how things could be improved. Paper rubrics are invariably too constrained and inflexible to support this kind of feedback.

Software tools seem as if they should provide the consistency of a rubric and the flexibility of individual comments. Yet, the systems I’ve tried—and I’ve tried both ad hoc and commercial options—have all been as unwieldy as the worst of the paper rubrics. Getting my comments out of the systems so I can hand them back to students is also too often a chore.

Whether I’ve come up with something better using Tinderbox is an open question, but I do have a working rubric and have learned a lot putting it together. So I’ve decided to explain how it works.

An Overview of the Rubric

To get started I think it would be useful to show the rubric in use, and so I’ve put together a demonstration file that is a stripped down version of my course planning project. It contains the rubric, a student note (my name, fake ID number) and nothing else. In the video below, I use this demonstration file to walk through the process of creating an evaluation sheet for the student’s essay, selecting the appropriate descriptions of their work for the various criteria and then reviewing the comment sheet that results.

I’m fairly certain that all of this will look boring. This is a good thing in practice—a rubric that is all fireworks would be a distraction, and a lot of the utility of what I’ve made comes from its flexibility, which I don’t really show—but alas, demure adaptability makes for bad video. The excitement will come (maybe?) in subsequent posts when I explain how I use rules, agents, and export templates to make everything work. For now though, you’ll just see the following:

  1. A note is created and then automatically converted to a rubric container and renamed.
  2. The rubric container is expanded to reveal the evaluation criteria containers. For each criterion, one child is selected as a comment by ticking its checkbox. In one case, the criterion’s grade is revised manually and the relevant titles are updated as a result.
  3. The main rubric’s key attributes now show the final grade for the assignment. The hover expression will also show the final grade.
  4. A general comment is added for inclusion on the comment sheet.
  5. The preview pane shows the comment sheet that will be exported. Both the general and criteria-based comments are organized using additional, organizational text. Everything has been properly formatted.
  6. The selection for the first criterion is changed and revised. The changes are reflected immediately in the comment sheet.

(There is no sound with the video. The silence feels weird and peaceful I think.)

What I Wanted

My rubric isn’t fancy, but it uses techniques that are basic enough for me to understand and are exposed enough for me to adapt on the fly. I’m sure there are more elegant ways to do some of what I’ve done. That said, what I built does what I hoped it would in the four areas that mattered the most to me.

First, I wanted to select comments, not number grades. I did, however, want the rubric to translate my selections into numerical point values. I also wanted it to compile these values into a grade breakdown and to use them to calculate the overall grade for the assignment. I wanted both of these things to happen automatically based only upon the text I selected.

Second, I intended the comments I selected while grading to be given back to students as part of their feedback, and so, I wanted my rubric to be flexible enough to allow me to revise or add to the prepared text in order to respond specifically to individual students’ work. I also wanted to be able to deviate from the basic grade breakdowns when necessary.

Third, once the grading was done, I wanted my rubric to export document files containing comment sheets that I could return to students either electronically or in hard copy. I wanted these files 1) to compile the comments I’d selected and written and 2) to organize and format them in a useful way. I also wanted the resulting files to be named so as to allow bulk upload to my college’s course management system.

Finally, I wanted the rubric to be able to travel from semester to semester. In other words, I needed to be able to revise the criteria, descriptors and grade calculation quickly and easily in order to adapt the rubric to new assignments and new classes. If I had to dive under-the-hood regularly, I wouldn’t do it and the whole project would be a waste. I know myself.

The Plan for Posts

Over the next week or so, I’m going to post a short series that explains how my rubric works. I think I can do everything in four posts, so my planned progression is:

  1. an overview of the rubric (this post);
  2. the prototypes and agents that build instances of the rubric;
  3. the rules that calculate grades;
  4. exporting the rubric as a comment sheet for students.

If I need to break one of these posts apart to have more room, I’ll update this list to account for the change. That way it can serve as a table of contents.

Finally, before moving on, I want to acknowledge that Mark Anderson’s TbRef and his responses to questions on the Tinderbox user forum made the difference between success and failure. Mark Bernstein’s “Actions and Dashboards,” which can be found in the Tinderbox Help menu, was my go-to resource for generating ideas and solving problems. I also relied heavily on other people’s questions and posts at the Tinderbox user forums and the backstage group. So thanks to all.

Next post: setting up the prototypes.

 January 20, 2016  Hypertext
Jan 19, 2016

Love and Mercy

This movie was a real downer. It didn’t help that (as I discovered watching) I don’t know the Beach Boys’ music well enough to fill in what’s happening based on the musical snippets. So I was left dealing with disjointed moments of abusive people damaging the main character.

I also found it odd that I couldn’t place the genre of this film. For large stretches, it felt like equal parts disease-of-the-week and rock-doc-bio-pic. By the end though, it was veering toward a believe-in-yourself-and-you-can-do-it, all-you-need-is-love inspirational tale à la Rocky et al.

Maybe I was just tired.

Jan 18, 2016

What Happened, Miss Simone?

I discovered Nina Simone in “Where Lies the Homo?,” an autobiographical, found-footage film by Jean-François Monet that moved me deeply when I first saw it and that I spoke about a few times at conferences but never got around to writing about. Over a long key passage in the film’s second half, Simone sings “Every Time We Say Goodbye” and together the music, the montage of images and the context they allude to are heartbreaking.

This documentary let me know who Simone was and, more importantly, let me see how much better she was than the moment I discovered her in. A real artist living fiercely in impossible circumstances with extraordinary people.

 January 18, 2016  Movie Logs
Jan 17, 2016

Rogue Nation

This movie mustered real surprises. It also felt like a spy story rather than an action film. So I enjoyed it.

I’m struck though by how sexless and how passionless the whole thing feels when I think back to it. It’s as if the movie can be read as an allegory for Tom Cruise’s body: finely tuned, machine-like, and operating at peak capacity. A spectacle of ordinary realities — age, fatigue, softness — denied.

 January 17, 2016  Movie Logs