
2016 BOA San Antonio



Also, it seems to me like the Indy super was scored on a higher curve than SA. Avon, for instance, scored over a 94, yet Flower Mound only a 93. Now granted, obviously they're not being compared at the same competition... it just felt like the SA scores were 'low'. A 93 for FloMo was low for a winning run, and a 90-something for Leander is 'low' for that performance. The Indy scoring makes more sense because things won't change much in a week, and yet watch... the Texas bands like Leander will magically inflate to at least a 95 or something. You definitely can't cross-compare the scores of these two competitions today. If I had to guess right now, I'd say Leander, Avon, and Tarpon Springs are your medalists.

Right

Link to comment
Share on other sites

That's the other downside of the split draw based on ranking: when every band is on a similar level and they all perform back to back, it can very easily compress scores. The reality is that the top echelon of bands in Texas is tighter than ever before, all the way down to about tenth place tonight, not to mention Bowie. And since these eleven or so bands all have different strengths, their scores are going to come out tight.

Link to comment
Share on other sites

Flower Mound did what I thought they would, impressive win for sure.

 

The Woodlands... shocker of the day. I loved the show design. Redemption from Area for sure.

 

Grand Nationals Attendees:

 

  1. Leander
  2. Claudia Taylor Johnson
  3. Ronald Reagan
  4. Cedar Park

Johnson and Leander far exceeded expectations.  

 

First 2 out:

 

- Cedar Ridge

- LD Bell

OK, I'll admit I'm not that well versed in BOA, but why wouldn't Flower Mound be attending Grand Nationals if they won this Super Regional?

Link to comment
Share on other sites

I'll say one thing: the music ensemble judge was extremely suspect in prelims.

 

He's Canadian.

 

Update: I couldn't resist the joke. David Orser judged music ensemble in prelims. It's not his first time judging, and considering how good all of these Texas bands are, I don't think he was entirely wrong.

Link to comment
Share on other sites

Wow, what a contest. Every band in finals was just phenomenal. We've seen some really good San Antonio finals over the years, but this one was especially impressive. It reminded me of 2013, when things were crazy close from top to bottom.

 

A few comments on finals.

Obviously the biggest surprise was Reagan and The Woodlands completely swapping placements from what everyone predicted. I didn't see that coming at all, even after watching them both, but congrats to The Woodlands on a crazy late-season surge.

 

Leander is really peaking with this show at just the right time; I expect big things from them in Indy. Same goes for CTJ, who cleaned that show up better than I think any other band could have. Geez, that thing is so hard, both musically and visually.

 

I definitely feel like the scores were lowballed quite a bit here, but that also seems to have been the case at San Antonio ever since 2013. No one has scored above a 93.5 since 2012, and while I'm not entirely sure of the reason, I think I have an idea what it could be.

 

I fully expect all the Grand Nationals-bound bands' scores to inflate pretty drastically come next weekend.

Link to comment
Share on other sites

You hardcore historians can call me out if this is wrong, and maybe it's already been mentioned...but Churchill must be the first band to ever perform in 4 BOA regional finals in one year, right? Really impressive feat!

I think you're correct! I'd also point out that they seem to have a new head director, so hopefully this is only the beginning of another Churchill legacy!

Link to comment
Share on other sites

Well, one reason they fell behind is that Johnson and The Woodlands arguably have two of the hardest combined music and visual books this year (I think Hebron is the only other in that mix) and just weren't going to be clean enough at Conroe.

 

At any rate, they'll absolutely be in finals on Wednesday and Saturday, but perhaps not as high as many originally thought. For what it's worth, I thought they gave a performance worthy of cracking the top 7.

Link to comment
Share on other sites

Apparently it was music general effect that ultimately kept L. D. Bell out of finals.

 

Total music general effect scores (with overall total scores) from prelims:

 

14. Cypress Falls - 34.10 (84.55)
15. Cedar Ridge - 33.80 (84.40)
16. L. D. Bell - 32.80 (84.30)

 

I think another interesting aspect is the ensemble/individual splits they had:

 

MPI: 18.20  MPE: 16.20
VPI: 18.90  VPE: 16.50
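
For anyone curious how those subcaptions roll up into the totals above, here's a rough sketch in Python. The weighting is my own reconstruction from the posted numbers - two music GE judges out of 20 each, visual GE out of 20, and each individual/ensemble pair averaged into a 20-point performance caption - not something pulled from an official BOA handbook, so treat it as an assumption. The 16.60 visual GE figure is inferred too, since it wasn't posted:

```python
# Rough reconstruction of how BOA subcaptions appear to roll up into a
# total score. The weighting is inferred from the posted numbers, not
# taken from an official BOA adjudication handbook.

def boa_total(ge_music, ge_visual, mpi, mpe, vpi, vpe):
    """ge_music: sum of the two music GE judges (each out of 20);
    ge_visual: visual GE (out of 20);
    mpi/mpe and vpi/vpe: individual and ensemble performance scores
    (each out of 20), averaged into 20-point performance captions."""
    music_perf = (mpi + mpe) / 2
    visual_perf = (vpi + vpe) / 2
    return ge_music + ge_visual + music_perf + visual_perf

# L. D. Bell's prelims numbers from above. With an assumed (unposted)
# visual GE of 16.60, the total matches the posted 84.30:
print(round(boa_total(32.80, 16.60, 18.20, 16.20, 18.90, 16.50), 2))  # 84.3
```

Under that assumed weighting, Bell's music GE deficit counts at full value while each performance judge's points count at half, which would be why GE is the caption that kept them out.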

Link to comment
Share on other sites

Looking at prelims vs. finals sheets is always so very interesting to me.

 

Vandegrift was a solid 3rd and dropped to 7th during finals.

Leander was 6th (!!!) and had a stellar finals run to finish 3rd. We saw the same flip between the two at Austin: Leander was clearly first in prelims, then Vandegrift swept finals. Did the fireworks affect Vandegrift at all? One issue I'm seeing with them is that their execution is unbelievable, but they lack the "boundary-pushing" GE we see from other bands. That very well could be holding them back at BOA, but who knows. Very interested to see how they fare at state. Leander: what a wicked show. Just so dang fun, so cool, so slick. Such a thrill ride. I could go on.

 

I am absolutely thrilled for The Woodlands; it's got to feel incredible to blow everyone's expectations out of the water. I think they and CTJ have really found the sweet spot between demand and execution... Kudos to those groups.

 

Can't wait for a Flower Mound video to pop up... ( ;) )

Link to comment
Share on other sites

I think another interesting aspect is the ensemble/individual splits they had:

 

MPI: 18.20  MPE: 16.20
VPI: 18.90  VPE: 16.50

 

I think this is an artifact of differences in personal judging emphasis - giving more points for certain things than for others. There are many instances among middle- and top-tier bands in prelims where the MPI and MPE differ by at least a point, but we normally wouldn't give much thought to a one-point difference between those captions. If a one-point difference (which flies under the radar) becomes common, then a two-point difference comparatively isn't that much more; it just doesn't fly under the radar. The same goes for VPI and VPE. Just look at James Martin's visual scores.
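
To put that comparison point concretely: if a judge habitually scores one caption about a point under another across the board, every band's subtotal shifts by the same amount and the ordering between bands is untouched. A toy sketch with made-up numbers (nothing here is from the actual recap):

```python
# Toy illustration: a judge who consistently scores ensemble (MPE) about
# a point under individual (MPI) shifts every band's caption subtotal by
# the same amount, so the ranking between bands is unchanged.

bands = {"Band A": 18.2, "Band B": 17.6, "Band C": 16.9}  # hypothetical MPI scores

ENSEMBLE_OFFSET = -1.0  # this judge's habitual individual-to-ensemble spread

def music_perf(mpi, offset=ENSEMBLE_OFFSET):
    mpe = mpi + offset      # ensemble score under this judge's emphasis
    return (mpi + mpe) / 2  # averaged music performance caption

ranked = sorted(bands, key=lambda b: music_perf(bands[b]), reverse=True)
print(ranked)  # ['Band A', 'Band B', 'Band C'] - same order as the raw MPI
```

The offset lowers everyone equally, so it reads as a judging emphasis rather than an error; only an inconsistent spread would reshuffle placements.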

 

At this point I am just speaking generally, not necessarily to you... And I'm kind of going out on a limb here...

 

This is my defense of judging bias.

 

I hesitate to use the term 'bias' because people tend to read 'political bias' or 'unfair bias' or even 'prejudice' into it. But the reality is that a judge without biases cannot judge at all; he or she would have no prior notion of what a good band sounds or looks like, and thus would be unable to recognize and score any good music or visual production whatsoever. What's required is that a judge have competently formed biases, and BOA ensures that by choosing its adjudicators.

 

The adjudication handbook itself - implicitly, I mean - merely allows for a certain band's performance to be interpreted into a certain scoring range. It's not like the judges are under pressure to subjectively approximate as close as possible the one and only real, objective score that a band's performance deserves in a given caption, for there are many real, objective scores even in the same caption that a band can deserve for the same performance. It's not just that a judge 'sees different things than another might', or that we should put a margin of error after the sub-total. True, there can be error and judges do see different things, but these are mitigated substantially by having competent adjudicators: they know what they're looking for, and they look for it effectively most all of the time. The point is that there can be a legitimate range of difference in scores of the same caption, of the same performance, without actually making any judging error.

 

How is this? The reality is that there is no rigid definition of the terms used in adjudication - just a range of competent interpretations of those terms. This is different from the sciences and mathematics in toto. Any time there is not absolute, univocal, philosophical precision in the definition of a term (which is exceedingly difficult), there can be legitimately different opinions - for there are many ways of legitimately 'carving up' ambiguity. There are many ways to legitimately get to Columbus, Ohio if there are not watertight qualifications on how to get there. So just as in any community, there needs to be something of a gentleman's agreement to settle for a plurality of competent opinions, rather than enforcing an absolute, unanimous homogeneity. This we call the adjudication handbook, and it prevents chaos by sketching very broadly the norms of the community, without suffocating the future of the activity. And if we ever doubt that there are different opinions, we need only to look at the different products that different programs put out. Then also consider whether any group of more than one intelligent, educated person agrees on absolutely everything about a certain subject matter.

 

But we shouldn't be in any hurry to nail down with philosophical precision the meaning of these terms, for if the adjudication terms were static rather than dynamic, the entire future of marching band would be locked in to a predetermined way of doing things. There would be no 'paradigm shifts', no moving forward, no designers pushing the limits of what general effect or execution legitimately means on a marching field, no unexpected Flower Mound or CTJ productions. As with any human community (and the marching arts community is an extremely young one, I may add), there is a continual organic development in which the members of the community evaluate and reevaluate their norms and rules, as they come to a deeper or different understanding of themselves, as well as their goals and means of getting there. This simply cannot be transposed or codified into a short adjudication handbook, for that organic development still has more unpredictable achievements to produce. Moreover, even the handbook is always necessarily just a partial expression of the greater expanse of both written and unwritten, inchoate and intuitive community goals and norms that shape the activity.

 

The letter of the law then has to be open enough to allow for what we cannot predict - so that our expectations can be pleasantly surprised - and turn the rest over to the prudent, continual assessment of the community, even if this means we can't program judging robots. Unpredictability is simply ineliminable from any facet of human life, and the scoring accounts for that because the judges and designers are sensible human beings capable of handling surprising human achievement. Despite the appearance of nice boxes with numbers in them, adjudication is not really like science or mathematics at all, where there's only one true and one false response in the end. There's no harm in playing with the numbers, for they are valid, but they are not the only valid numbers, as they would be in the sciences. Adjudication retains the competent plurality of the activity it governs, and this plurality will probably never go away. Why the numbers at all, then? We accept the numbers because they give a real and valid - competently determined - resolution to the activity. They give a prudent, informed answer as to what the ranking should be. For this activity, that is acceptable, for it still encourages high levels of achievement and produces life-changing experiences as a result - and (more than) fulfills so many other unwritten goals and expectations.

 

TL;DR: Different judging biases are good so long as they are competent (and they are). Because of this, we get to see awesome new designs every year!

Link to comment
Share on other sites

You bring up some intriguing points, because I think here is where we broach a terribly interesting topic: what is it that we are really judging?

 

BOA and UIL play off each other so well when the topic of biased judging comes into play because two biases, both valid in their own ways, can find themselves at odds. I love both judging systems and appreciate their differences, but they have to be recognized.

 

I believe that BOA approaches its judging with an emphasis on the artistic and creative advancement of the marching band arts. This is a lofty goal that yields many fascinating interpretations and stretches the more subjective aspects of art and taste. Innovation and boldness are encouraged and admired, such that events are always dynamic.

 

UIL seems to me more cerebral and less interested in artistry beyond a commitment to teaching the fundamental art of performance in the moment. In that respect, UIL focuses much more on the interaction between student and teacher, as well as on the product of discipline, dedication, and technique. Great bands have both creative artistry and technique, but in a teaching environment, how much emphasis should be placed on the artistic sensibilities of the director?

 

Depending on how each of us personally judges a band, we will often draw different conclusions about how to rank it. It is useful to remind ourselves of how we approach our own opinions of the bands, while remembering that there are other, valid ways to think about any particular group.

 

Just another tangent as we approach UIL 6A State and the judging disparities that are bound to occur.

 


Link to comment
Share on other sites
