- Whereas when played separately it would be referred to as an arpeggio. But in harmony we might still refer to it as a chord, as in saying, "arpeggiate the C# minor (chord)" to start the Moonlight Sonata.
This might better be described as arpeggiating C#m in second inversion, or even C#m/G# in the right hand over C# in the left...
This is getting possibly weird, but you could call it an arpeggiation of G#sus4(#5)/C#.
- As per my knowledge, and as per Britannica, a chord actually uses three or more notes. A two-note structure is called a dyad, which implies a bit of confusion in the term "power chord" (written as 5, as in G5, which == G D == 1 5), as it is not by definition a chord but a dyad.
This may be a pedantic clarification, but that is the definition
- I find it a really scary idea to go to a third-world country like the USA! Or rather, it seems like a third-world country when you're from a developed one.
Unfortunately I'm type 1 diabetic, which is a death sentence there unless you're rich. Then I also accidentally broke my collarbone this summer. And I had this weird throat infection.
As a type 1 diabetic I have had to go to the ER numerous times in my life, and as traumatising as that experience can be, I can't imagine the feeling of also being financially ruined for the pleasure of not dying.
It's weird, I know plenty of people who avoid going to the doctor because it's annoying to wait for over an hour.. but I could see that for an American you might just NEVER go. I guess it's like, ok someone in the family is sick, and we're not rich, so we will sacrifice them to the sun gods. Or I guess you go into indentured servitude? There are people for whom $100k is made very very slowly.
I just do not get it! I guess these things add up properly if you are very wealthy, but people here think that it's the function of society to make sure you can have basic things like medical treatment. Of course it's not perfect here... try going to the dentist, for example. Then you're almost in the USA. Somehow it's not considered medical. A long session could run you $500 or more.
I have compassion for all the USAians out there! If I was USAian and planning to have a child, I guess I would consider just going to a civilized country for a while, like... I don't know, Rwanda or Ghana or somewhere that can afford to keep people alive(?). In seriousness though (as those countries are very far), you could just go north or south; it's closer. If you go to Canada, a lot of the time you wouldn't even need to present ID. You could just get treated right there by walking into a hospital, being triaged, and then waiting (admittedly for maybe almost a day).
Always bring fun things to the hospital if you aren't literally bleeding out all over the place, because you'll be in the waiting room for 12 hours if you're in a big city.
But when it's over you're alive and about as rich as you were before. Seems like a good societal deal to me. I'm scared of being trapped somewhere like the US honestly. It's a nightmare scenario. Although I'm sure it's pretty great if you're a billionaire with slaves and so on... but someone has to incur that cost (the slaves from the lower castes).
- It seems like many people online say that there is no perceptible difference between, e.g., 320 kbps MP3 and "FLAC" (which usually means CD quality: 44.1 kHz, 16-bit). Often sources say something about there being no way it sounds different and then invoke the Nyquist theorem [0].
Personally I don't get it, as for me there is a clear difference not only between MP3 and CD, but even between different bitrates beyond that. Maybe I'm not typical, as I've usually been listening through studio monitors, and through a recording interface which handles 192 kHz at 24-bit and beyond.
I've definitely noticed that on certain systems you aren't going to notice a difference, as the system itself is the bottleneck (e.g., Bluetooth). In my experience, though, if you use the right driver (ASIO or WASAPI on Windows, or anything on Mac and Linux nowadays), I can tell the difference instantly on recordings I know well.
Most music did not get released in ultra HD, but some things are available at 96 kHz and beyond. I recommend checking out Radiohead, Bob Marley, or Pink Floyd in ultra HD (>= 48 kHz, >= 24-bit), as there have been such releases. I found Bob Marley - Legend in 192 kHz / 24-bit and it sounds incredible. You can hear each individual member of the percussion section.
[0] https://en.wikipedia.org/wiki/Nyquist-Shannon_sampling_theor...
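For context on the numbers these debates revolve around, the standard formulas are easy to check (a quick sketch; the helper names are my own):

```python
import math

def nyquist_khz(sample_rate_hz: int) -> float:
    # The highest frequency a sample rate can represent is half that rate
    return sample_rate_hz / 2 / 1000

def dynamic_range_db(bits: int) -> float:
    # Theoretical dynamic range of ideal n-bit quantization: 20 * log10(2^n)
    return 20 * math.log10(2 ** bits)

print(nyquist_khz(44_100))             # 22.05 (kHz, CD audio)
print(nyquist_khz(192_000))            # 96.0 (kHz)
print(round(dynamic_range_db(16), 1))  # 96.3 (dB)
print(round(dynamic_range_db(24), 1))  # 144.5 (dB)
```

Which is exactly why the Nyquist argument is about frequency content, not about everything else (lossy psychoacoustic compression, dithering, playback chain) that can differ between an MP3 and a lossless file.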
- This is super cool! I would love to see the specific mapping of keys that was used, said to be inspired by the accordion. Even though that's a way less interesting detail than the way the spoon picks up the distances, or the "bit banging" used to achieve 8-bit precision on the modulation from two 4-bit connections.
Sounds pretty swell!
I wonder if the spoon controller could be adapted to send modulation parameters to arbitrary instruments via a midi port. I would buy a spoon modulator if it was reasonably priced. It would be a great add-on to a piano style keyboard without pitch bend or mod wheel etc
- Tried my personal benchmark on gpt-oss:20b: what is the second mode of Phrygian Dominant?
My first impression is that this model thinks for a _long_ time. It proposes ideas and then says, "no wait, it's actually..." and then starts the same process again. It goes in loops examining different ideas as it struggles to understand the basic process for calculating notes. It seems to struggle with the heptatonic-note-to-set-notation (semitone positions) conversion, as many humans do. As I write this it's been going at about 3 tok/s for about 25 minutes. If it finishes while I type this up I will post the final answer.
I did glance at its thinking output just now and I noticed this excerpt, where it finally got really close to the answer, giving the right name (despite using the wrong numbers in the set notation, which should be 0,3,4,6,7,9,11):

    Check "Lydian #2": 0,2,3,5,7,9,10. Not ours. Its notes are: 1 ♯2 3 ♯4 5 6 7

The correct answers as given by my music theory tool [0], which uses traditional algorithms, in terms of names would be: Mela Kosalam, Lydian ♯2, Raga Kuksumakaram/Kusumakaram, Bycrian.
I find that looking up lesser-known changes and asking for a mode is a good experiment. First, I can see if an LLM has developed a way to reason about numbers geometrically, as is the case with music.
And by posting about it, I can test how fast AIs might memorize the answer from a random comment on the internet, as I can just use a different change if I find that this post was eventually regurgitated.
After letting ollama run for a while, I'll post what it was thinking about in case anybody's interested. [1]
Also copilot.microsoft.com's wrong answer: [2], and chatgpt.com's: [3]
I do think there may be an issue where I did it wrong: after trying the new ollama GUI I noticed it's using a context length of 4k tokens, which it might be blowing way past. Another test might be to try the question with a higher context length, but at the same time, it seems like if this question can't be figured out within that, it will never have enough time...
[0] https://edrihan.neocities.org/changedex (bad UX on mobile! - and in general ;)). won't fix, will make new site soon) [1] https://pastebin.com/wESXHwE1 [2] https://pastebin.com/XHD4ARTF [3] https://pastebin.com/ptMiNbq7
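For what it's worth, mode rotation in set notation is mechanical enough to sanity-check in a few lines (a sketch; `mode` is my own helper, not part of the tool linked above):

```python
def mode(pc_set, n):
    """Return the nth mode (1-indexed) of a scale given in set notation."""
    # Rotate so the nth degree becomes the root, lifting wrapped notes an octave
    rotated = pc_set[n - 1:] + [p + 12 for p in pc_set[:n - 1]]
    root = rotated[0]
    return [(p - root) % 12 for p in rotated]

# Phrygian dominant is the 5th mode of harmonic minor
phrygian_dominant = [0, 1, 4, 5, 7, 8, 10]
print(mode(phrygian_dominant, 2))  # [0, 3, 4, 6, 7, 9, 11] -- Lydian #2
```

The second mode comes out as {0,3,4,6,7,9,11}, i.e. 1 ♯2 3 ♯4 5 6 7, matching the Lydian ♯2 name.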
- Monad, Dyad, Triad, Tetrad, Pentad, Sextad, Septad, Octad, Nontad, Dectad, Monodectad, Didectad/Bidectad (?)
Or,
1 Note/Unison
2 Interval/Dyad
3+ Chord
And I agree, an interval is essentially a distance. Distance between three points makes no sense, as they might very well lie outside of one straight line. And even if they are on the same line... are we measuring the distance between each distance?
It's ambiguous what that might even mean, but the original poster might be thinking of a collection of intervals: 0 or more notes with intervals relative to a given root.
For example if you think in integers (pitch set notation):
They might have different numbers of notes, but I see them as the same type of identity; I just call them all changes.

    m6   { 0 3 7 9 }         Minor 6
    5    { 0 7 }             Power Chord
    ma13 { 0 2 4 5 7 9 11 }  Ionian
    N.C  { }                 No Chord/Rest
    °7   { 0 3 6 9 }         Diminished Seventh/Dim Seven
    ... etc

Also note that 13 means two different things: either the septad above or a pentad of the form 1 3 5 7 13, aka 7(6) "dominant add six".
So in set notation it's:
    ma13 { 0 4 7 9 11 }
    13   { 0 4 7 9 10 }
    Etc..

- That particular algorithm doesn't care whether the instruments are guitar or otherwise. There are other algorithms in vamp that would deal with individual notes. But in terms of separating tracks, vamp doesn't do that. There are some new ML-based solutions for it, though, so you could separate the tracks and run vamp on those outputs.
But to get the chords I don't think you need to worry about that.
- What a cool thread! I like how you put up the specifics of your workflow, and especially the details of the commands you used! Particularly the vamp commands, because, as you say, they are somewhat inscrutably named/documented.
I started dabbling with vamp as well a couple years ago, but lost track of the project as my goals started ballooning. The code is still sitting (somewhere), waiting to be resuscitated.
I have had an idea for many years about building the chord analysis out further, such that a functional chart can be made from it. With vamp, most or all of the ingredients are there. I think that's probably what chordify.com does, but they clearly haven't solved segmentation or absolute-time-to-musical-time conversion, as their charts are terrible. I don't think they are using chordino, and whatever they do use is actually worse.
I got as far as creating a python script which would convert audio files in a directory into different midi files, to start to collect the necessary data to construct a chart.
For your use case, you'd probably just need to quantize the chords to the nearest beat, so you could maybe use:
vamp-aubio_aubiotempo_beats, or vamp-plugins_qm-barbeattracker_bars
and then combine those values with the actual time values that you are getting from chordino.
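The combining step could be sketched like this (hypothetical data shapes: assuming the chord estimator yields (seconds, label) pairs and the beat tracker yields a sorted list of beat times):

```python
import bisect

def snap_to_beats(chord_events, beat_times):
    """Snap each (time, chord) event to its nearest beat time.

    chord_events: list of (seconds, label) pairs from a chord estimator
    beat_times: sorted list of beat timestamps in seconds
    """
    snapped = []
    for t, label in chord_events:
        i = bisect.bisect_left(beat_times, t)
        # Only the beats immediately before and after t can be nearest
        candidates = beat_times[max(0, i - 1):i + 1]
        nearest = min(candidates, key=lambda b: abs(b - t))
        snapped.append((nearest, label))
    return snapped

chords = [(0.48, "Cmaj"), (2.07, "Am")]
beats = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
print(snap_to_beats(chords, beats))  # [(0.5, 'Cmaj'), (2.0, 'Am')]
```

From there, counting how many beats fall between bar markers gets you most of the way to musical time.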
I'd love to talk about this more, as this is a seemingly niche area. I've only heard about this rarely if at all, so I was happy to read this!
- I really appreciate the opinion that using args and kwargs is "bad". It always annoys me when you get them as your parameters, and it's even worse when you go to the declaration and it also contains unlabeled parameters. A lot of wrapper libraries seem to do this. I try to just name every parameter I can. The code is so much easier to use now that we're all using autocomplete in the IDE.
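To illustrate the point (a toy example; the function names are made up):

```python
# Opaque: callers (and the IDE) can't see which keys are expected
def make_user_opaque(**kwargs):
    return {"name": kwargs.get("name"), "email": kwargs.get("email"),
            "admin": kwargs.get("admin", False)}

# Explicit: every parameter is named, typed, and autocompletable
def make_user(name: str, email: str, admin: bool = False) -> dict:
    return {"name": name, "email": email, "admin": admin}

print(make_user("ada", "ada@example.com"))
# {'name': 'ada', 'email': 'ada@example.com', 'admin': False}
```

With the explicit version, a typo like `emial=` fails loudly at the call site instead of silently becoming a stray dict key.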
- I really like the link you provided and have watched it before!
But I do want to say that C == green is not arbitrary at all. It is consistent with my calculations, which are consistent with Newton's. Usually the colour assignments I see given to notes are wrong, but this C == green is consistent with mapping to light using octave equivalence, given by the following:
Let's say that C == 261.63 Hz, and that green == 5.66 × 10^14 Hz. Using octave equivalence we can make a small Python program to check whether C == green:

    f_prime = f * 2 ** (i / 12)
    # Where:
    # f_prime is the derived frequency (in this case green, ~5.66 × 10^14 Hz)
    # f is the reference frequency (in this case 261.63 Hz)
    # i is the interval in semitones

    # Visible light spans roughly 400-790 THz (1 THz == 10^12 Hz)
    light_range_min = 400 * 10 ** 12  # Hz
    light_range_max = 790 * 10 ** 12  # Hz
    C = 261.63  # Hz

    f_prime = C
    octaves = 0
    while f_prime < light_range_min:  # go up one octave at a time
        f_prime *= 2
        octaves += 1

    print(f"C in the range of light has f == {f_prime} Hz, which is "
          f"{f_prime / 10 ** 12} THz. We had to go {octaves} octaves up to arrive there")
    # outputs: ~575.33 THz, 41 octaves up

We can look up colour charts like [0] or [1] and find that this frequency is in fact associated with the colour green. The rest of your commentary seems valid.
- It could be related, but I also want to weigh in here and say this: hypoglycemia can occur with no relation to the other side of diabetic symptoms, i.e., hyperglycemia. In other words, there are people who suffer from hypoglycemia without ever getting high blood sugar, and so they are not "diabetic", which would mean having issues in both directions.
- As a piano/keyboard player: a lot of musicality is possible on a keyboard. It is possible to learn to modify your technique to better utilise the velocity available on a particular keybed, weighted or non-weighted. When playing keyboards you are working within a subset of the potential dynamics available on a piano. Though expressivity is lessened, there is still a huge palette once you learn to use less total force and less differentiation in force (dynamics).
I know I can play with high musicality on almost any keyboard with velocity, because I was blessed to have learned on bad instruments. But it doesn't compare to the depth of the sound generated by all the moving parts and interactions happening in a real piano. Not only the sounds, but also the sheer weight of the keys.
Most* keyboards/VSTs are just triggering a (pitch-shifted, looped) sample for a given note, doing that for n notes, and then summing the results additively.
That is definitely not what occurs in a piano, though. There you have the 3-dimensionality of the physical world, like the way waves travel through distance and shape. When sounds' harmonics interact, resonant nodes in overtone sequences can trigger each other into resonance, which can trigger other resonances throughout the tone. Maybe you know the feeling of depressing the sustain/damper pedal while sitting in front of the instrument and giving it a smack (or holding down the keys you are not playing and doing the same). Or running your nail or a pick over all the low notes with sustain... like you are in a cave.
In MIDI/digital, there's the fact that dynamics are usually going to be 7-bit (128 velocity values). MIDI did that and it made sense at the time, and other keyboards and VSTs mostly follow suit. I'm surprised this generally gets passed over. Obviously there are more than 128 strengths of a note in real life.
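A toy sketch of what that 7-bit quantization throws away (a simplification; real keybeds also apply velocity curves, which this ignores):

```python
def to_midi_velocity(force: float) -> int:
    """Quantize a normalized strike force (0.0-1.0) to a 7-bit MIDI velocity."""
    return max(1, min(127, round(force * 127)))

# Two noticeably different strike forces collapse to the same velocity value
print(to_midi_velocity(0.500))  # 64
print(to_midi_velocity(0.502))  # 64 -- the difference is quantized away
print(to_midi_velocity(1.0))    # 127
```

Any nuance finer than 1/127th of the dynamic range simply doesn't survive the trip down the wire.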
But all that said, I think it's possible to learn keyboards/music theory/songs/playing on a non-weighted keyboard, but false to say that digital/non-weighted is equivalent to an acoustic piano. You only really need the real thing for really dynamic music like jazz, classical, instrumental, et al. But it feels so very wrong to play that kind of music on bad keyboards.
* The Roland V-Piano and Pianoteq, as well as many products I'm unaware of, do in fact use physical/acoustic modeling as opposed to triggering samples, but that approach has not been predominant even among high-end digital instruments.
- I've been using Firefox for about as long as I can remember, and really don't notice sites not working. I do notice, however, that using any browser without UBO makes my eyes bleed in an unending agony of capitalist garbage. It's like using a browser and then putting sand in your eyeballs.
- It's kind of awesome how Behringer went from being the crappy version of gear to being the generic, affordable version of actually good gear within the past ~10-15 years.
When I was selling gear, just before my estimated time-line, they were known as basically a Yamaha*- or Peavey-ish company (a very wide line of products at affordable prices), but "ever-so-slightly" crap-ish. Since then they have become a/the go-to if you want essentially a Moog or those other big names for less than 1 carrot.
I have a Prophet 5 v2 so no real need for me (in terms of analogue synth), but great to see this kind of stuff getting into the hands of more people.
Sure, the pots, etc. aren't quite as solid as the big ($) versions, but that kind-of doesn't matter at all, for a lot of use cases.
I might even get one of their thingies at some point. For less than the price of a beater car it's become a tempting case to have GAS [0] for Behringer [1].
[0] https://library.oapen.org/handle/20.500.12657/48282
[1] https://www.sweetwater.com/c510--Behringer--Synthesizers
*Yamaha actually makes a lot of very high-end gear as well, from pianos to guitars, with fairly extensive custom-shop stuff. I brought them up because, in addition to handling mid- and high-end gear, they have always made really high-quality entry-level gear as well.
- > since its width was set to 0
Is this totally necessary? My question is genuine. I don't know much about font programming.
> I’ll add the option extend the list, and see if I can find a better list of defaults.
That is, if the values have to be hard-coded. If you can get me contact info, I could send you the master list of chords; maybe you could use that. My username is not really figurative.
I could see multiple "versions" of the font. There are different conventions one could use to spell chords; even Hal Leonard vs. Sher use different ones. There are symbol shorthands, like writing aug vs. +, or ma7 vs. '△7', or dim vs. a circle. There are quite a few different ways to do it, so I think it could be cool if you could pick a version. But I could be overcomplicating this.
> I think a more “advanced” use case like the one you described can be addressed by something like https://lilypond.org
LilyPond is a music engraving system; it produces sheet music. That doesn't really overlap with what you're making, which is not sheet music but more of a shorthand. I'm actually more interested in the kind of approach you're doing, as I hate sheet music. For me, what music notation needs is simply the chord and the time. That's why I mentioned the use of '|' to indicate bar lines. Actually writing down and reading individual notes is something that basically takes years to learn, and I don't consider it in line with my (subjective) definition of modern music. Usually by the time people learn to read sheet music, they've completely missed real "playing", with all the effort going into sheet music instead. If you're someone who improvises, you just need the harmonic structure (chords over time).
So another question, which might be simple to answer: can we make a font that just has some kind of thing (glyph, symbol, Unicode code point) which raises the thing above the thing? Or do we absolutely need to hardcode it?
- I'm not sure why I got down-voted, but I guess I could rephrase and establish my tone better; my previous comment could have come off as critical. I just wanted to ask: would it be possible/worth it, time-wise, to do a similar thing but support inputting more different strings into the ligature part? This is really cool, and I could totally see using it if you could add a few more chords. Great work! This would cover a lot of different tunes really well, and is a way better presentation than you normally get using plaintext and two separate lines.
- What if I want to render arbitrary text in the ligature instead of your list of chords? What if I disagree with how you spell minor as 'm', whereas I want 'mi' or even '-'? How about things like half-diminished, which is often written as 'ø7' vs. calling it a mi7(♭5), as a particular music school I went to prefers. Also, if you're sticking to possible chord qualities, it's 2^12 == 4096. I've implemented functions that return the spelling of a chord quality over that range, and can return a "proper" answer dependent on rules, but I think in your case you could/should be letting the user call the chord anything they want.
Your presets didn't even include the chord D6, for example. Why have 7th chords like A7 and not have A6? So either cover the chords, or don't claim to support chords; you could change it to say it "supports chords that [you] in particular have heard of". Unless there is a way to switch into that mode (where I can spell chords)? I noticed the first test string of \am7\ works, but I can't get flats or sharps to work. It really should accept text like this:
, or anything you want to write. I have a use-case where I invented a new way to describe chords, so the possibilities in the "normal" world of music don't apply. I also have a use-case where I want to be able to say any of the 12 * 2^12 things, squared (squared because "slash notation" adds a whole new identity to the chord symbol): what you could potentially say in 12 roots of 12-tone harmony with an extra component in the bass. There are 2415919104 possible inputs to this, but the thing is, there are multiple ways to spell each one.

    Abmi6(#5,add9)/Db5

I propose a chord spelling where the font only accepts capitalized letters as note names. Then you can use 'b' as flat '♭' with no conflict with the note name B.
Also, you can specify chords using numbers. Numbers take accidentals before the number part of the string, as opposed to after in the case of "alphabet" note names. And numbers can be spelled either "Nashville", like 1, 2mi, 3, or "Roman", like the following example. Notice too that in that one, any minor, diminished, or half-diminished chords use lowercase numerals.
You could use numeral-glyph substitution when encountering substrings with consecutive members of "IiVv" that are also valid numerals.

    Summertime - Gershwin
    |Imi | iimi7(b5) V7 | Imi | |
    |ivmi7 | | bVI7 V7 | ...

Also it would be nice to be able to use the character '|' to add measure lines, fer countin', fer the slower ones out there like yours truly. I could see this as being useful if it could be used a little more comprehensively, like in the ways I've outlined.
"Easy" thing for you to do to improve this drastically is to make it so you can just have any old text you want in between the backslashes. Or have another entry point in the font where that is possible.
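For what it's worth, the combinatorics mentioned earlier check out (a sketch of the arithmetic, using the squared-slash-notation interpretation described above):

```python
qualities = 2 ** 12              # each of 12 pitch classes is either in or out
symbols = 12 * qualities         # one of 12 root names for each quality
with_slash_bass = symbols ** 2   # any symbol over any other symbol as the bass

print(qualities)        # 4096
print(symbols)          # 49152
print(with_slash_bass)  # 2415919104
```

Which is why enumerating presets can never keep up, and free text between the delimiters is the only approach that scales.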
- TL;DR: Don't eat carbs
Am type 1 diabetic; can confirm that abstaining from raising blood sugar is even simpler than carbs + insulin. The first option, abstinence, is an empty equation, which is easy to reason about; the second involves addition between two (theoretically cancelling) variables, which is prone to variation and/or error.
When you are diabetic, especially Type 1 (which is more severe), your food consumption on the momentary level is dictated by the push and pull of insulin vs. carbs. So, to play devil's advocate, let's say you follow the general recommendation, eat essentially what you feel like, and match it correctly with insulin. You take your measurement/estimation of the result of summing the insulin's graph with the food's graph, and you aim for a difference of +/-0 mmol/L of blood glucose over some period of time. That's not going to be a flat line, because insulin has a generic shape whereas foods have their own individual shapes (on the graph), but you're generally doing pretty well if there's no total variation at the end. That can be almost as good as just not eating carbs, but that's also assuming there aren't negative effects of taking external insulin besides erratic blood sugar.
There are just fewer moving parts if you eat real food (vegetables, meat, nuts, oils) and skip filler (bread, pasta, potatoes, sweets, pop). Then you are also consuming food with significant nutritional content per calorie. I would even go further and say that this kind of diet would likely benefit the health of non-diabetics as well. It's literally better food, and for me it's gotten to the point where I think of carbs as basically garbage. I still eat them, and do insulin, but am just pointing out how much better it is to factor them out where possible.
And then if you're gonna eat carbs you can pick lesser evils, like brown things vs. white things. Or rice vs. wheat. Or diluted fruit juice (let's say with sparkly water) vs. Coke. This reduces both variables in the equation, the up (carbs) and down (insulin), which makes the entire equation less drastic to balance. So this is really basic math (0 vs. an equation with two vars) in terms of what the solution to diabetics' diets is.
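The "empty equation vs. two-variable equation" point can be put in toy-model form (illustrative only, with made-up factors; not medical guidance):

```python
# Toy model only: the factors below are invented for illustration.
CARB_RISE = 0.25     # assumed mmol/L rise per gram of carbohydrate
INSULIN_DROP = 2.5   # assumed mmol/L drop per unit of insulin

def net_glucose_change(carbs_g: float, insulin_u: float) -> float:
    return carbs_g * CARB_RISE - insulin_u * INSULIN_DROP

print(net_glucose_change(0, 0))   # 0.0 -- the "empty equation": nothing to balance
print(net_glucose_change(60, 6))  # 0.0 on paper, but both terms vary in practice
print(net_glucose_change(60, 5))  # 2.5 -- a small dosing error shows up directly
```

The zero-carb case is trivially stable, while the balanced case is only stable if both estimates are exactly right, which is the whole argument.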
This opinion is probably not backed by the FDA, but I've heard that cinnamon and yams/sweet potatoes can passively help with blood sugar control, in a gentle kind of sense.
- I don't understand why you would get downvotes. Maybe it's because you're making a general point, non-specific to the topic at hand (Fediverse). Or maybe it's because Capitalism is the dominant ideology around these parts (Earth).
It's obvious to many who've seen through the motive of greed that collaboration yields more "economic" results. Economy can be a synonym for efficiency, but in modern usage it is the definition of a caste system where groups exist primarily to extract the blood of those doing the real work.
But people practice doublethink, equating lord-serf "economies" with effectiveness, because greed (the motive to serve one's own interests) == good, and because profit (the act of taking for oneself the excess value generated) == good. Because somehow a selfish motive makes the work legitimate, because that's how "the world works". Or because there are examples of useful work having happened under this system.
Until they suffer for it, many won't be able to see the problem, but luckily the vampires get more and more brazen, to the point where even people living life "the right way" have begun struggling in many cases. So when the previously-sheltered upper castes begin to bleed out into the ever-gaping maw of Moloch [0], there will be a change in perception. The Capitalists are insatiable (luckily [?]), which I think will eventually undo the idea. It will take more blood, and more time, but I feel it's starting to happen.
All this to say, that you are not alone in your vantage. And here's Ginsberg excerpted, on what I interpret to be this exact topic (which granted, is only my personal interpretation).
"Moloch the incomprehensible prison! Moloch the crossbone soulless jailhouse and Congress of sorrows! Moloch whose buildings are judgment! Moloch the vast stone of war! Moloch the stunned governments! Moloch whose mind is pure machinery! Moloch whose blood is running money! Moloch whose fingers are ten armies! Moloch whose breast is a cannibal dynamo! Moloch whose ear is a smoking tomb!"
- It seems there are two types of people. Those who think governmental/financial/legal/medical/technological/etc systems generally serve the public's interests, and those who don't.
Internally I refer to the former as "systemic" people. They believe something is good for you because the FDA said so. I find that these people also buy into capitalism. E.g., poor people shouldn't be so stupid/lazy, or their parents shouldn't have been stupid/lazy. I find these people are often privileged and benefit from power imbalances imposed by the system.
But it's hard to relate to that once you can see reality more accurately.
Classic "conspiracy theory": The term "conspiracy theory" was concocted by the CIA as a label for anyone talking about the JFK assassination. And it was never put to bed. I see the term increasing in usage over time as well.
But yeah, systemic people gonna system. Weirdly it seems they outnumber people who form their own opinions.
Prose is mostly focused on describing meaning using any words that serve to do so.
Verse is more concerned with structural factors like rhythm, tonality, and structure within syllables, within types of sound, or within parts of speech. Other linguistic devices that look at details beyond the strict meaning of the words, like rhyme or many other factors (you could even use visual spacing, for example), can be considered in verse.
Within verse there's the concept of the iamb. I think of it as a tuple of two syllables which are said weak-strong. Pentameter means five metrical feet per line (ten syllables, when the feet are two-syllable iambs), and iambic means the feet are weak-strong pairs. Most of Shakespeare is written like this. English also naturally sounds iambic a lot of the time.
Iambic pentameter sounds like a particular Shakespearean sonnet [0], or like any of them [1].

Normally you'd also look at rhyme structure if writing a legit Shakespearean sonnet [2], but I fired this one out in the style of fast food. So this is technically iambic pentameter but not technically a sonnet.
[0] https://shakespeare.mit.edu/Poetry/sonnet.I.html
[1] https://shakespeare.mit.edu/Poetry/sonnets.html
[2] https://www.poetryfoundation.org/education/glossary/shakespe...