Anthony V

Members

  • Content Count: 191
  • Joined
  • Last visited
  • Days Won: 2

Anthony V last won the day on September 6 2017

Anthony V had the most liked content!

About Anthony V

  • Rank
    Advanced Member
  • Birthday 05/27/1994

Profile Information

  • Gender
    Male
  • Location
    Portugal
  • Interests
    Reading really old books. Or new books about old things. Listening to wonderful music. Learning.
  1. The longer I'm connected to this activity, the more I realize how little anyone can understand from the outside-in what scores mean. I'm somewhere between being connected and being on the outside, so I hardly feel in a place to say anything about any particular scores. But speaking generally on the matter (and perhaps more briefly than... usual...) is something reasonable. Is judging subjective? Yes, and it must be, since judges don't have access to an objective, Platonic world of forms against which to evaluate performances. Does that make it entirely unscientific? No. Why? Because not all subjective viewpoints are tolerated. Which viewpoints are tolerated? Those from educators, designers, or others with acceptable credentials. Is that sufficient? Generally and for the most part. Does that mean there is still bias in judging? Yes. Can judging be more scientific? Yes. How? Most generally, by the removal over time of viewpoints which do not reflect the tolerable range of scores a performance in a given caption deserves by present standards--as well as the invention of new viewpoints which better and more consistently reflect that range. And how does that occur? An incalculable number of ways, but all of them put judges in positions in which they must rank-order certain novelties or excellences over others. Are there other necessary conditions? Yes, a vast number which have yet to be determined, but one of which is the aggregate continuity of judging experience within the community. Okay, they're kicking me out of the coffee shop now.
  2. Shout out to the N.Crowley band. You've improved so much since Mansfield POC! Good for you!
  3. You're very welcome! I'm glad that Arlington was in the mix, and that they are yet another solid band program in the area. Thank you for being part of what makes the fine arts stick around in north Texas!
  4. I think Trinity has a better shot at area finals than I previously thought. Richland is a moving target right now, so ask me again in a week when the area results are posted and I'll update my predictions.
  5. It may also be worth noting that a similar situation happened regarding Rosemount. They barely made it into finals, whereas a lot of people were placing them in the upper half after seeing the stream. Sometimes these confusions can happen, especially if you're not live for the event. Addendum: At any rate, I don't think any Richland kids or parents reading this should hang their heads too low. They are quite good, and I suspect they know it. I've seen them in person several times. But a lot of other bands are very good too, and this is just how the chips fell.
  6. As principalagent said elsewhere, partly it's the rain. As a result, prelims was their 2nd run-through, or so I hear. But that's not the whole story, of course, because Trinity has gotten a lot of rain as well. So I think they are partly still trying to find their identity design-wise, or in some crucial components of in-house comportment. (As to what, exactly, I would be throwing darts to find out.) Given their past couple years of success with Cartwright, I thought they had everything nailed down, but evidently not. Last time they went to Indy they got 22nd and scored a tad higher. My honest opinion is that they have in fact improved since then. However, I am worried that they are not yet able to consistently keep pace with the increasing depth of competition, and I honestly don't know what, precisely, they need to do to keep pace in that way. I don't think the fact that they won Mansfield told us much about where they are this season, especially since we don't know the score spread and there was only a prelims. Maybe -- and perhaps this is more what you meant -- you were surprised that they placed where they did given what everyone thought after seeing them by stream or live. That... I am not fully certain how to account for. It wasn't like a VGE judge blew out the score or something. The numbers were generally very depressed and flat across captions compared to finalist standards in prelims (although it is noteworthy that they were lower in individual performance captions and in VGE). But the fact that there was uniformity in the judging, notwithstanding new procedures, is a sign of reality. Recall I've talked before about how a range of legitimate scores can be given to the self-same performance. I think Richland swung very low in that range, for whatever reason, whereas people were tending to put them up higher in that range -- and it could very well have happened with a different panel -- given their success in recent years. That's my best shot at an explanation.
  7. The past few years I've jokingly asked Kathy Handler when we're allowed to start submitting bets and pledge amounts for the forecast. It could be a fundraiser at this rate.
  8. I will likely be at this contest all day, and will try to provide up-to-date information on what's going on. After the contest, I'll provide my thoughts on both the competition and exhibition bands in detail -- much like I did for Mansfield, except a bit more filled-out. (I did not prepare as well as I should have!)
  9. A note: the tier rankings were generally constrained to the block in question.
  10. Sorry for the belated assessment that I promised. A fair warning: it's blunt. If you've come here for praise, you may not find any. This is just an honest assessment of the preliminaries. Some of my notes were on my phone but didn't get saved, and then my phone died. So I switched to pen and paper, made fewer comments, and more tier placement remarks. I ranked bands within their blocks, and then attempted to rank across blocks. Here we go, in performance order...

Block 1.

Haltom -- // Tier 1, high 70's, low 80's // I was a bit behind schedule, so I did not watch Haltom except from the sidelines. Musically, they blend well and individuals generally produce a high quality of sound.

Western Hills -- // Tier 3, 50's // Unfortunately, my notes for Western Hills were lost. I placed Western Hills in tier 3. Of the bands in their class, they were the smallest. I could have seen them and Seguin flip-flopping because their level of individual achievement exceeded Seguin, I think. If I remember, Seguin had 15-30 or so more members, which, at that classification, makes a large difference. Looking above, I accidentally posted the class C results instead of class A. The latter results were, in ascending order, Western Hills, Timberview, Seguin. Timberview took both auxiliary captions.

VR Eaton -- // Tier 2+, mid-high 60's // I give Eaton my "honorable mention" award of the day. They had a fun, but dark, carnival-themed show. They have room to grow. They're strongest in visual and GE. It's hard to guess scores, though, especially since Haltom went first and that might deflate things.

Lamar -- // Tier 2-, low 60's // [Sparse notes; I'm pretty sure they were lost. My phone died and I didn't have my pencils and notepad.] "Hard times don't create heroes... they reveal the hero within."

Summit -- // Tier 2++, mid-high 60's, low 70's // [I didn't write any comments. As an ensemble they project well, but they play with a rather harsh sound. Visually, they marched fairly well relative to their peers. Their show design was congruent with their level of achievement.]

Block 1 Rankings: Haltom; Summit/Eaton; Eaton/Summit; Lamar/Seguin; Seguin/Western Hills; Western Hills/Seguin. [I was very undecided on where to put Seguin.]

Block 2.

Legacy -- // Tier 2/2-, low-mid 60's // Music (Tier 2-): Okay music [performance]. Visual (Tier 2-): Severe visual issues at times. GE (Tier 2-): Man versus machine [was the show theme].

Richland -- // Tier 1/2+[+], mid-high 70's, low 80's? // Music (Tier 2+[+]/1-): Electronic issues. Muddy opening articulation, tier 2+ music performance in the intro. Some balance issues [keeping them from an obvious tier 1 music performance]; [I do not think they were] listening. [Great brass] projection, [along the lines of Summit]. Good tuning. Musically good [in terms of tone quality and ensemble cohesion], comparatively [i.e. relative to their closest peers]. Visual (Tier 1/1-): Pretty clean. GE (Tier 1/1-/2+[+]): About the size of Legacy. Only through ballad. Best design at least since Haltom. [Afterthoughts: They had a very high level of individual and ensemble achievement relative to their closest peers. Pound-for-pound, they were of the highest quality thus far. And the jazzy part was the most engaging, energetic, and fun part of the contest, of the bands that I saw from the stands. I think it was also their best performed section of the program.]

Arlington -- // Tier 2, mid-60's // Music (Tier 2/2+): Excellent trumpet trio. Good brass throughout. Need more confidence and balance [though]. Same for [woodwinds]. Tuning issues. Musically, not terribly far away from Richland. [(In retrospect, I think there was a bigger gap than I estimated.)] Visual (Tier 2): Lots of marching content; very difficult. Not as clean as Richland. GE (Tier 2): [Performed] partly into closer?

Burleson Centennial -- // Tier 2+[+], low-mid 70's // Music (Tier 2-): Some electronic issues. Ensemble balance issues. Articulation issues. Follow through! (Referring to phrasing.) Needs polish, consistency. Tuning issues. Not as confident after [uniform change]. Visual (Tier 2+): [No visual notes, but I think I meant tier 2+ since I estimated them in the low-mid 70's.] GE (Tier 2+[+]): [First impression:] Tier 2+ design. Better design than Arlington. Uniform change. Full show? Closed well.

North Crowley -- // Tier 2-, low 60's // Music (Tier 2/2-): Tuning [in the woodwinds] is bad. Low [individual] achievement. First hit [was effective]. Timing issues in ballad. Good closer opening, weak after[wards]. Visual (Tier 2-): Sloppy. Low visual demand. GE (Tier 2-): Mediocre intro design [compared to peers]. Integration not impressive except at impact.

Block 2 Rankings: Richland/Burleson Centennial; Burleson Centennial/Richland; Arlington; Legacy/North Crowley; North Crowley/Legacy.

Block 3.

Lake Ridge -- // Tier 2+/-, high 60's // Music (Tier 2/2-): Phasing issues [and] balance issues. Okay impact achievement. Consistency!! [i.e. not following through or maintaining full shape of dynamics.] Not there. Harsh brass. Visual (Tier 2): Visually dirty. Cleaner at moments. [Partly due to] visual design lack of clarity at times. GE (Tier 2+[+]): Good show rep[ertoire]. But not quite [tier] 1- range. Achievement lower: [only] 2+ because [lack of achievement in the] closer.

Burleson -- // Tier 2+/-, low 70's // Music (Tier 2+/-): Phasing, [but then] decent intonation. Balance? [i.e. there wasn't a corresponding ensemble balance.] Good solos! Double tonguing! Great brass projection. Tier 2+/1-? [Then the show continued on, and I commented] No. Then 2/2-. Visual (Tier 2/2-): Tier 2/2- at first blush. Demanding. Just pretty sloppy. GE (Tier 2/2+): High achievement at times. Mediocre rep[ertoire]. Staging?? [Meaning there were some times where drill placement was not clearly fitted to musical presentation.] Ballad [was the second movement]. Barber of Seville [was fun, and made me smile.] [Three-quarters] of the show was put on the field.

Grand Prairie -- // Tier 3, high 50's, low 60's // Music (Tier 3/3+): Intonation!! [Meaning that there were severe tuning issues.] [Tier] 3 so far [i.e., as of introductory movements]. Harsh [brass sound]. Inconsistency [in following through on note value and dynamics]. Visual (Tier 3): Dirty, exposed. Lots of individual errors. GE (Tier 3): Vanilla [tier] 3. Decent achievement.

Sam Houston -- // Tier 3/3+, high 50's, low 60's // Music (Tier 3/3+): Mediocre 3. Brass not too harsh, but a bit splatty. Balance issues. But decent for [tier] 3. Visual (Tier 3/3-): Intro dirt. High demand. Some achievement. Tier 3- intro visual. Ind[ividual] errors galore. Upper body!! Some [visual] ensemble achievement. GE (Tier 3+/2-): [No notes.]

Timberview -- // Tier 2-, low 60's // Music (Tier 3+): Mic issues. Not over-exerted but a bit splatty. Brass consistency: follow thr[ough]. Controlled. [Tier] 2-? Visual (Tier 2-): Bad visual [ensemble] (cover-[down]). Slightly cleaner [and] tighter than [Sam Houston]. Ind[ividual] errors. Horns up [errors]. Cleaner than [Sam Houston]. Closer [is visually] messy. GE (Tier 2-): Mic issues. Tier 2- design [repertoire]. Vis[ual] demand more in [the tier] 2- range.

Block 3 Rankings: Lake Ridge/Burleson; Burleson/Lake Ridge; Timberview; Grand Prairie/Sam Houston; Sam Houston/Grand Prairie.

By the end of prelims, but before announcements, my scoring/ranking would have been something along the following lines (give or take a couple points):
1. Haltom -- 81
2. Richland -- 78
[3. Mansfield -- 76, had they competed]
3. Burleson Centennial -- 75
4. Burleson -- 71
5. Lake Ridge -- 69
6. Summit -- 68
7. Arlington -- 66
8. Eaton -- 65
9. Legacy -- 63
10. Timberview -- 62
11. North Crowley -- 61
12. Lamar -- 60
13. Sam Houston -- 58
14. Grand Prairie -- 57
15. Juan Seguin -- 56
16. Western Hills -- 54
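For anyone puzzling over the tier shorthand in the notes above, here is a rough sketch in Python of how the tiers line up with score bands. The band edges are my own illustrative assumptions inferred from the post, not any official scale.

```python
# Illustrative tier-to-score-band mapping (assumed edges, inferred from the
# post's shorthand like "Tier 1, high 70's, low 80's"; bands overlap on purpose,
# since tier boundaries are fuzzy).
TIER_BANDS = {
    "1":  (77, 84),   # high 70's / low 80's
    "2+": (67, 76),   # high 60's / low-mid 70's
    "2":  (62, 68),   # mid 60's
    "2-": (58, 63),   # low 60's
    "3":  (50, 60),   # 50's
}

def consistent(tier: str, score: float) -> bool:
    """Check that an estimated score falls inside the rough band for a tier."""
    lo, hi = TIER_BANDS[tier]
    return lo <= score <= hi

# A few of the prelims estimates from the post above:
assert consistent("1", 81)       # Haltom
assert consistent("2-", 61)      # North Crowley
assert consistent("3", 57)       # Grand Prairie
assert not consistent("3", 81)   # a tier 3 band scoring 81 would be out of band
```

The point of the sketch is just that the tier labels and the final number estimates are two views of the same judgment, so they should agree with each other.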
  11. My pre-contest list of likely finalists, sort of in order:
Locks (1-5): Keller, Flower Mound, Duncanville, Coppell, Keller Central
Probable locks (6-7): Aledo, Mansfield
Not locks but fairly safe bets (8-9): Summit, Trinity
The "wouldn't surprise me" list (10-13): Byron Nelson, Grapevine, Legacy, Lewisville
  12. A minor note: There is indeed scoring for design in the UIL area/state rubrics, even though there is not a separate caption for it. It is not much, but it is integrated into the music and marching captions. For those unaware:

I) UIL marching adjudication is broken down into 4 metrics: (40%) Individual Marching, (40%) Ensemble Marching, (10%) Drill, (10%) Integration of Marching Components.
  1) The "drill" sub-caption is generally concerned with the difficulty of the drill, and whether it is a good match for the achievement level of the performers.
  2) The "integration of marching components" sub-caption is concerned with the drill's degree of fitness to the musical presentation.
  3) Both of these together come reasonably close to what is expected of VGE, with the major exception that the guard is not adjudicated here. It is evaluated as a sub-sub-caption under "individual marching" vis-a-vis "handling of equipment." It often makes little difference in terms of score.

II) UIL music adjudication is broken down into 3 metrics: (33.3%) Brass, Woodwind, and Percussion Performance, (33.3%) Ensemble Performance, (33.3%) Musicianship.
  1) The first two sub-captions are basically two layers of music ensemble; the first concerning particular ensemble contributions, and the second concerning the ensemble as a whole.
  2) The last sub-caption is like music general effect, except it makes no reference to any fitness with the visual presentation, nor directly to repertoire. (I cannot remember offhand if the BOA rubrics explicitly state that music GE is judged that way, but in practice it probably is.) The key words that the UIL rubric co-authors chose to use here were along the lines of "artistry," "maximum use of dynamics," and "control of tempo" -- in other words, the things which for the most part make the piece come alive and be effective.

Now, of course, you need music which is actually written such that the performers can manifest those excellences, and so in that way it implicitly calls for effective music. But beyond this, your music program is generally not required to be anything other than UIL Blue Bell Vanilla in order to do maximally well.
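The weighted breakdown described above can be sketched as a quick calculation. The weights come straight from the post; the sub-scores in the example are invented purely for illustration.

```python
# Sub-caption weights for UIL marching adjudication, as described in the post.
MARCHING_WEIGHTS = {
    "individual_marching": 0.40,
    "ensemble_marching":   0.40,
    "drill":               0.10,
    "integration":         0.10,
}

def weighted_caption(sub_scores: dict, weights: dict) -> float:
    """Combine 0-100 sub-caption scores into a single weighted caption score."""
    return sum(weights[k] * sub_scores[k] for k in weights)

# Hypothetical sub-scores (made up for illustration):
example = {
    "individual_marching": 80.0,
    "ensemble_marching":   75.0,
    "drill":               70.0,
    "integration":         65.0,
}
# 0.4*80 + 0.4*75 + 0.1*70 + 0.1*65 = 75.5
assert abs(weighted_caption(example, MARCHING_WEIGHTS) - 75.5) < 1e-9
```

Notice how little the 10% sub-captions move the total: a band could gain or lose several points in "drill" and "integration" and shift the caption by well under a point, which is the sense in which design scoring "is not much."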
  13. I think Mansfield is a contender for Area finals. There are some bands they would have to step through given their placement at HEB, but I don't think it's out of the question. My opinion of their exhibition performance yesterday is that they would have been sparring for a top 3 finish or, less likely, a top 5 finish. If you're a student at MHS, you could ask Mr. Ludlow (but don't post the answer here) whether the judges suggested where Mansfield would have placed.
  14. I said this last season -- maybe the season before -- and I'll say it again this season. The norms of the adjudication/teaching community determine what design or performance elements get such-and-such amount of points. The adjudication handbook, by contrast, is an extremely vague reminder of these norms. The direction of causality is important. But even the norms themselves are not perfectly unambiguous, and wherever there is ambiguity, there is legitimate diversity of interpretation.

The upshot: there is not a single score corresponding to a band's performance or design element (e.g. music ensemble) which the judges are trying to unearth. There is, instead, a general range of acceptable scores. Directors, as I understand, may in fact dispute a score if they feel that something is terribly out of line. So, when we poke our noses into the fancy numbers on the spreadsheet, we need to bear in mind that the number provided by a judge is just one of many legitimate scores -- and sometimes even rankings -- the band could have received in that caption. Often the results could have been different to some degree, and it would have been legitimate.

At this point, people are often inclined to throw their hands in the air and repeat the trite, overused phrase: "Well, it's all subjective in the end...." But that's not the way this works. There can be scores which are outside the range of acceptability as determined by the adjudication/teaching community. The goal is to gain, over time, a better and better grasp of the adjudication ideals implicit both in the vagueness gestured towards by the adjudication handbook and in the norms of the adjudication/teaching community.

With that said, we need to realize two things. Firstly, we need to realize the nascent stage of the marching arts more generally. Most disciplines, by contrast, have been around for centuries, and have had many more opportunities to consider at length what counts as good or bad assessment. Secondly, we need to acknowledge that, at least partly on account of this nascent status, there is not an often-used direct forum, so far as I understand anyway, for moving towards better adjudication ideals. Most of the impetus seems to come from designers, who put adjudicators in novel spots where they need to make discriminations that are not usually made. And so far, this is working. But it takes time.

I am not denying that there may be judges whose scores, at least from time to time, are on the verge of unacceptability. In fact, it would be a bold claim to say that adjudication error never happens. But we need to put this in the context of what counts for legitimacy: a range of acceptability. I think what I am saying is compatible with some of the concerns raised by the posters above.
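The "range of legitimate scores" idea can be made concrete with a toy example (all names and numbers invented for illustration): if each performance has an acceptable score range rather than one true score, two panels can both stay entirely within range and still produce opposite rankings.

```python
# Hypothetical acceptable score ranges for two performances (invented numbers).
legit = {
    "Band A": (74.0, 79.0),
    "Band B": (76.0, 81.0),
}

def in_range(band: str, score: float) -> bool:
    """True if a score falls inside the band's acceptable range."""
    lo, hi = legit[band]
    return lo <= score <= hi

# Two imaginary panels, each scoring within every band's legitimate range:
panel_1 = {"Band A": 78.5, "Band B": 77.0}   # ranks A over B
panel_2 = {"Band A": 75.0, "Band B": 80.0}   # ranks B over A

for panel in (panel_1, panel_2):
    assert all(in_range(band, score) for band, score in panel.items())

# Both panels are legitimate, yet they disagree on the ranking:
assert panel_1["Band A"] > panel_1["Band B"]
assert panel_2["Band B"] > panel_2["Band A"]
```

That disagreement is not "it's all subjective"; a panel that scored Band A at 90 would be outside the range and genuinely out of line.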