
2018 6A State


Recommended Posts

Why do I feel as if I am missing something here? Is there a decoder ring somewhere?

 

I think it's because people think those of us expressing concern and proposing solutions are just Round Rock supporters whining about what happened to Round Rock. They're missing the main point that most of us have made, which is that this should be regarded as a problem that can and does affect any band program in Texas at any time, and it shouldn't.

 

In the end, it's really not about Round Rock or any other specific band that has received wildly varying ordinals in a UIL competition. Round Rock's 20-ordinal spread between marching judges this year was just a shockingly clear example of what detractors have admitted HAPPENS TO MANY BANDS, EVERY YEAR, to one degree or another. That is exactly why we're urging directors to call for real change -- it's not just Round Rock and one undesired outcome (or even two, though that does establish a concerning pattern, as you pointed out earlier). It's about the integrity of the whole competition and the need for some kind of calibration and accountability of judges in the system moving forward. TxDragonDad has put it in clear scientific/mathematical terms several times now, and if we all take our emotions and biases about our own kids and programs out of the equation, it seems fundamentally clear that there IS a consistent and provable problem, and it SHOULD be addressed for everyone's sake in the future.


Frankly, I haven't seen one of the posters even give a thought to the reversed outcome. Do you not care about that impact? Why is this just a one-way street? Because most of the posters are from the Austin area?

 

One of my suggestions for similar circumstances in the future was to modify/extend the so-called "Bad Judge Rule" to address extraordinary spreads in ordinals between judges who are evaluating the same caption, which in this year's situation might have sent 13 bands through to finals and would not have cost the 12th-place band a thing.

 

I don't think anyone said TWHS didn't deserve to be in finals; in fact, several of us acknowledged repeatedly that every one of the 12 that went to finals absolutely 100% deserved to be there. By my count, there were about 17 bands that could have been in finals, and I would not have questioned their place there for a moment. TWHS puts on a consistently amazing show every year. I was in awe of how well your program recovered after Harvey last year. You could not tell by the end of the season that TWHS and many other Houston-area schools had gone through such a devastating natural disaster, because the quality of the performances remained so high.

 

Again, this discussion isn't really about RRHS or TWHS or any other specific band or year or event -- it's about the entire UIL competition and judging system going forward. It may be in the directors' hands, but I'd say it's a foolish director who doesn't acknowledge the importance of having the parents', district's, and community's full support for and belief in the integrity of the competitions they attend.


FYI, if Leander hadn't been saved by the "Bad Judge Rule" at Area, and they had been kept out of State entirely by one wonky judge, I absolutely would have gone to bat just as passionately for them as I would for my own band. They clearly earned a spot at State, and it would have been a huge black eye for UIL in my opinion if Leander had been denied the opportunity to compete there because of one person's whim. Area H was supposed to send 3 bands to State, but they sent 4. That was, in my opinion, the right thing to do.

 

I don't have a kid in the Leander band; I don't pay taxes there; LISD isn't even in the same Area as my kid's band -- in fact, they're a rival, and it would have benefited my kid's band placement-wise if Leander hadn't been at State this year. But I would have considered it a travesty if Leander weren't at State, and I would have unequivocally supported their director if he had registered a complaint.


All excellent points. I have certainly not forgotten how Leander barely got in at Area with the Bad Judges Rule. I’m just not well versed enough in all of this to understand it all. But I am listening. And learning.

 

Hopefully directors are reading this site as well and will vote for changes, if that is indeed how it works (which I have no reason not to believe). And I feel pretty sure conversations are happening in band halls around the state. I'm just thankful to have this platform to learn from, especially now that I no longer have a child in the program.


So some questions to help me better understand the process:

Does it make sense to have the visual judges at the 30 (or 35?) yard line on opposite ends? What is the origin of that placement?

Do the judges deliver just a raw score, or do they send the ordinals to tabulations?

Should ordinals be continued, or should raw score summations be used, like pretty much every other organization does?

Should judges for larger contests (25/30+ bands) have training on how to manage scores and ensure more reasonable placements?
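On the raw-scores-versus-ordinals question, a toy example (entirely hypothetical scores on an assumed 0-1000 scale, not real UIL data or the actual UIL tabulation method) shows the two methods can disagree about who wins:

```python
# Hypothetical raw scores from three judges for three bands.
raw = {
    "Band A": [900, 900, 710],   # two strong marks, one harsh judge
    "Band B": [850, 850, 850],   # consistent middle scores
    "Band C": [700, 700, 700],
}

# Raw-score summation: just add the three judges' numbers.
raw_totals = {b: sum(s) for b, s in raw.items()}

# Ordinal method: each judge ranks the bands 1..n, then ranks are
# summed (lower total is better).
bands = list(raw)
ordinal_totals = {b: 0 for b in bands}
for j in range(3):
    ranked = sorted(bands, key=lambda b: -raw[b][j])
    for place, b in enumerate(ranked, start=1):
        ordinal_totals[b] += place

print(raw_totals)      # Band B wins on raw points (2550 vs 2510)
print(ordinal_totals)  # Band A wins on ordinals (4 vs 5)
```

Under raw summation Band B's consistency wins; under ordinals Band A's two first-place marks win despite the one harsh judge. Which outcome is "right" is exactly the policy question.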

 

I'm sure there are many other questions that can help better understand the process. It is the process that needs to be understood, reviewed, and potentially changed in order to address shortcomings. Leave the emotions out of it. Emotionally driven arguments will get no changes in the process.

 

I have a longer list of questions, too. Most importantly, are the judges required to score on a 0-1000 point scale? And how are ties in score resolved for each judge's ranking?
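On the tie question specifically: I don't know what rule UIL actually uses, but one common convention is "standard competition ranking," where tied scores share the best place and the next band skips ahead. A minimal sketch of that convention:

```python
def competition_ranks(scores):
    """Standard competition ranking ("1224"): tied scores share the
    highest rank; the next distinct score skips the used-up places.
    `scores` maps band name -> one judge's raw score (higher is better)."""
    ordered = sorted(scores.items(), key=lambda kv: -kv[1])
    ranks = {}
    for i, (band, score) in enumerate(ordered):
        if i > 0 and score == ordered[i - 1][1]:
            ranks[band] = ranks[ordered[i - 1][0]]  # share the tie rank
        else:
            ranks[band] = i + 1
    return ranks

print(competition_ranks({"A": 920, "B": 920, "C": 880}))
# → {'A': 1, 'B': 1, 'C': 3}  (A and B tie for 1st; C is 3rd, not 2nd)
```

Other conventions (fractional ranks of 1.5/1.5/3, or a tiebreak caption) would change the downstream ordinal math, which is why the actual rule matters for any automation.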

 

I could automate nearly all of this for UIL, but I need to know and understand the rules.

 

I have been researching statistical methods of identifying outlier data (standard deviation, Tukey approach, interquartile approach) and methods of dealing with them (removal, modification, etc.). I do not believe exclusion (which is by far the most common statistical method) makes sense given the small number of judges. Rather, I am researching common methods of "normalizing" outliers. In my opinion, they should not be fully neutralized, but rather moved closer to the norm. How far and how much is what I am trying to determine based upon others' research and accepted statistical methods.
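For what it's worth, the Tukey/interquartile idea combined with pulling outliers toward the norm might look something like this sketch (the 1.5×IQR fence is the textbook default, not anything UIL-sanctioned, and the ordinals are hypothetical):

```python
import statistics

def tukey_winsorize(ordinals, k=1.5):
    """Identify values outside the Tukey fences (Q1 - k*IQR, Q3 + k*IQR)
    and pull them back to the nearest fence instead of discarding them.
    With only five judges, the "inclusive" quantile method is used so
    the quartiles come from the observed values themselves."""
    q1, _, q3 = statistics.quantiles(ordinals, n=4, method="inclusive")
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [min(max(x, lo), hi) for x in ordinals]

# One band's hypothetical ordinals from five judges, one wildly high:
print(tukey_winsorize([3, 4, 3, 5, 23]))  # → [3, 4, 3, 5, 8.0]
```

The 23 gets pulled in to 8 rather than thrown out, so the dissenting judge still drags the total but can no longer single-handedly sink the band.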

 

Also, I think that all of this needs to be applied to the raw scores prior to applying individual judge ranks. Despite the clear emotional aspect of this issue, I want to present a logical and non-biased solution option which prevents future issues for all bands.


FYI, if Leander hadn't been saved by the "Bad Judge Rule" at Area, and they had been kept out of State entirely by one wonky judge, I absolutely would have gone to bat just as passionately for them as I would for my own band. [...]

 

There was no "bad judging" at Area H. There wasn't a 3, 7, 3, 7, 3 ordinal for them. Leander was 4th out of 4 top-notch groups which all deserved to advance. I think the biggest flaw with Area H is the alignment of regions. The regions that made up Area H do not have enough bands in them to ensure that there are 10 finalists and a minimum of 4 advancing.


I have been researching statistical methods of identifying outlier data (standard deviation, Tukey approach, interquartile approach) and methods of dealing with them (removal, modification, etc.). [...] Despite the clear emotional aspect of this issue, I want to present a logical and non-biased solution option which prevents future issues for all bands.

 

This will make some interesting data. We also have to bear in mind some other things as part of the analysis: when you have 300+ people spread all over the field, the visual judges are likely looking at different elements at the same time, so one judge may see something that the other doesn't. And this unusual separation of visual judges in UIL will give a very different perspective on the program. Visually, these two factors can cause completely understandable, and I would even say potentially reasonable, variation.

For music, I believe the judges are clustered around the 50 and should be hearing essentially the same thing. It is still possible that each judge is evaluating different performance aspects at different times (e.g., one is listening to brass, while another is listening to woodwinds, while a third is listening to percussion), and can therefore get a different take on the overall performance. So how and when a judge samples will add variability in the evaluation, both musically and visually.

The question is how much variation makes sense, and how to build a statistical model that allows for it, understanding that all performing groups have strengths and weaknesses. In my view the BOA model is really good here, as there are only 2 redundant judges (MGE); that system neatly sidesteps this entire issue (mostly). Unfortunately UIL has this problem in spades.

 

So reading through what I just wrote, the conclusion that I would logically take means that I like the suggestion that was made somewhere in this thread that the UIL judges are more focused within captions so there is little overlap between them. That simplifies what they are looking at during the show. I would also put the visual judges near the 50 yard line (45's maybe?). My opinion is that a head-on view is more appropriate for visuals, although maybe a case can be made for the angled view (would love to hear that case).

 

Maybe musically you have a brass judge, a woodwinds judge, and a percussion judge (with some special considerations for vocalists or the occasional string or electronic wind instrument factored in, and all could/should have opinions of the full ensemble sound too). Visuals could be split by guard/dance team/etc, and musical ensemble. I dunno, just a first pass set of thoughts.

 

Ok, so I rambled too long with several interruptions on my side. I'll post it anyway, and hope it is at least mostly coherent.


There was no "bad judging" at Area H. There wasn't a 3, 7, 3, 7, 3 ordinal for them. [...]

 

We didn't say there was bad judging. What was said was that they employed the "bad judges rule" which is what allowed Leander to move on.  And you are correct about the alignment.  We (LISD, Lake Travis and Westlake) should all certainly be a part of an area that moves more than 3 bands on to state.  


I guess the first step in any new solution is an empirical determination of a "possible" outlier.     Once an outlier has been flagged, then maybe the next step is that any single judge with an outlier must justify his or her score to the other four.  If the other four find the justification has merit, then the score stands.  If not, then this triggers the adjustment phase (TBD).


We (LISD, Lake Travis and Westlake) should all certainly be a part of an area that moves more than 3 bands on to state.  

 

Another solution to selecting bands to move on to the next level is to have a "wildcard" solution.    

 

For Areas, the top # bands qualify from each Area per current rules. Then, there is some method where the next # bands from all areas are placed into a single group from which some additional # of bands are chosen to move on via some criteria. This would allow for fair representation of all areas along with appropriate representation of top bands just outside the mark in a "stacked" Area. I don't think Area currently has adjudicators assess scores (only ranks). If there were scores, those could be used to determine worthy wildcard candidates for consideration.

 

I wonder if a similar approach would work for state finals...  Top 11 bands advance to finals using the current process, then all of the judges vote on the next 4 bands to pick 2 more to advance.   It puts the group of judges together as a team to ensure the line is drawn with the proper top bands advancing.   The guidelines could be that 12 advance if there are no outlier issues within the next four in standard order.   13 advance if there are outliers and the team determines those last 2 spots.     This is still just a brainstorm idea, so please beat it up or polish it from a turd into a diamond.   


Areas do have scores which convert to ranks/ordinals.

I’ve liked the idea that each area gets an extra advancer for each band that medals at state from their area. Or even an extra advancer if every band in that area makes state finals. That way, the strongest areas do get to send more bands after proving they should, in a sense.


This topic is definitely getting a bit wearisome with trying not to hurt feelings.  There's certainly more of a "walking on eggshells" feeling when I'm posting in this thread.  I hope by now that most people know me well enough to know that I respect every single band program in this state!  Of course I am going to root for Leander and LISD because that is where I live and Leander is the program my daughter was associated with.  That does not automatically make me AGAINST all other programs or districts!  Just because I don't mention them doesn't mean I don't respect them.  I think a good number of us feel the same way.  Please don't take this all so personally.  :(  

 

And let me mention again how moved I was by Alexander and their show this year.  I watched it again last night on YouTube.  So impressed!  I would love to see that show several more times and I'm sad I only saw it live once.  


Reviewing ordinal scores/placements should be triggered by a percentage deviation. An ordinal point swing from 1 to 10 as happened at Area B or 1 to 9 at another Area is much more statistically significant than a swing from 74 to 84 or 100 to 110 as could theoretically happen at BOA SA or Indy Grand Nationals.
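A quick back-of-the-envelope on those numbers (the field sizes are approximate) shows why the percentage framing matters:

```python
def relative_swing(low_place, high_place, field_size):
    """Ordinal spread between two judges, as a fraction of the field."""
    return (high_place - low_place) / field_size

# 1st-to-10th in a ~10-band UIL Area panel spans nearly the whole field:
print(relative_swing(1, 10, 10))    # 0.9
# 74th-to-84th at a ~100-band national event is a tenth of it:
print(relative_swing(74, 84, 100))  # 0.1
```

The same 9-to-10-place raw swing is a 90% disagreement in one context and a 10% disagreement in the other, so a review trigger probably wants a relative threshold, not an absolute one.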

 

Percentage statistics are almost always more informative, revealing, and helpful than counting statistics.

Case in point:

 

Area D: 20% anomalous outliers vs. 6%.


I think the Judson ISD and San Antonio NISD schools feel the same way.

In regards to only sending 3 Bands to SMC, you might want to look at the UIL/TMEA Agreement on Alignment for marching bands for the 2018-2019 and 2019-2020 school years.   HTTPS//Align.TMEA.ORG/Align.1820/  It lists how many Bands it expects to advance to SMC based on previous history.   I think this explains some of the reasons why Bands are put in different areas.


Maybe it's time for a 7A classification (only sort of kidding).

 

I had the same thoughts up until Leander ISD took 1st, 3rd and 5th at 6A classification last week.  Leander is nearly half the size of the big San Antonio schools and the big DFW schools.  I'm not sure about Vista or Vandy, but I doubt they are that much bigger.  Maybe have 6A go up to schools with 2999 kids, and a new 7A class for schools with 3000 and up.  


Each time you add a new classification, travel costs for all sports, academic and fine arts competitions go up for most schools, so I don't see it happening any time soon.

 

For reference purposes, here are the enrollment figures used for the last round of realignment by the UIL for all of the Finalists at this year's 6A SMC:

 

Vista Ridge     2349

Flower Mound     3618

Vandegrift     2582

Hebron     3584

Leander     2199

Ronald Reagan     3518

CTJ     3083

Keller     2995

The Woodlands     4435

Marcus     3274

Cedar Ridge     2855

Waxahachie     2235


For reference purposes, here are the enrollment figures used for the last round of realignment by the UIL for all of the Finalists at this year's 6A SMC: [...] The Woodlands     4435 [...]

4435 students in one high school? WOW! That is a small college campus.


I don’t know if this concept has been explored, but one game the band nerds I know like to play is top/bottom drop. You drop the top and bottom judges for each band, then add the ordinals. This mitigates the risk of random judges, but keeps the integrity of the average judge.

 

Of course this would get awkward if a band’s top and bottom score are both in music.
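Sketching that top/bottom drop with hypothetical ordinals (the caption question above is exactly what this simple version glosses over, since it drops scores without regard to which caption they came from):

```python
def trimmed_ordinal_total(ordinals):
    """Drop each band's single best and single worst ordinal,
    then sum the rest (lower total is better)."""
    s = sorted(ordinals)
    return sum(s[1:-1])

# Five judges; one outlier judge had this band 23rd:
print(trimmed_ordinal_total([3, 4, 3, 5, 23]))  # 12
```

Note that the same band with a fifth ordinal of 5 instead of 23 also totals 12, which is the point: one rogue judge no longer moves the result at all.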

