I've just come across your channel and from the looks of it, I've stumbled upon a gold mine! Please don't stop making videos on Pure Data. I'm just getting into learning it and I look forward to binge-watching your content and following along. :)
@xaisthoj
A year ago
You can try adding a trainable bias parameter to the mixed input of the tanh function: tanh(w·inputs + bias). The trainable bias should, hopefully, let the network change the phase of the output signal. You would only need to make the bias on the single output neuron trainable.
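A minimal NumPy sketch of that suggestion, outside Pd (all values here are made up for illustration): a single tanh neuron y = tanh(w·x + b) where gradient descent updates both the weight and the bias.

```python
import numpy as np

# Sketch of the suggestion: a single tanh neuron y = tanh(w*x + b)
# where the bias b is trained alongside the weight w by gradient descent.
# The frequency, learning rate, and target below are hypothetical.
t = np.linspace(0, 1, 1000, endpoint=False)
x = np.sin(2 * np.pi * 5 * t)            # one input oscillator
target = np.tanh(0.8 * x + 0.3)          # pretend desired output

w, b, lr = 0.5, 0.0, 0.5
for _ in range(5000):
    y = np.tanh(w * x + b)
    err = y - target
    grad_z = err * (1 - y ** 2)          # chain rule through tanh
    w -= lr * np.mean(grad_z * x)        # weight update
    b -= lr * np.mean(grad_z)            # the trainable bias update
```

Because the target is itself of the form tanh(0.8·x + 0.3), the loop recovers w ≈ 0.8 and b ≈ 0.3, showing the bias gradient is just the mean of the backpropagated error.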
@gilbertoagostinho
A year ago
Super interesting video, thanks for sharing it! It made me wonder whether simply resetting the phase of all your [osc~] objects simultaneously when you change your desired result value (i.e. by sending a [0( message to each of their right inlets at once) would improve the system, or even solve the "problem" you experienced altogether. When you tried desired results matching each of the three inputs (45, 61, 88 Hz), you'd expect weights of 0.0 for the non-matching oscillators and a full weight for the one matching the desired result. However, if the desired result and the input are out of phase, I don't think any combination of weights can generate the matching signal, except in the rare cases when the phases happen to match. I think this can be observed in your error graph: once the learning stabilises into a given weight, the error shows a sinusoidal shape, which suggests phasing, since sin(x) - sin(x + k) is also a sinusoid of the same frequency, with non-zero amplitude whenever k differs from n·2π (and the bigger the amplitude, the more out of phase the signals are). In fact, look at when your system gave a weight of -0.5 to that 45 Hz desired result: this was very likely because, by chance, the phases were close to 180 degrees apart, so inverting the wave got it very close to the original; in that case, the amplitude of your error is very small, and you can hear a coherent 45 Hz sound coming out.
This all got me thinking: if the system needs matching phases and frequencies of inputs in order to accurately recreate a desired result of a sine wave, it will not be able to accurately use some arbitrary frequencies of input to recreate a non-matching desired frequency; then perhaps an interesting take on this problem could be to feed a more complex sound but with a clear fundamental into this system, and have inputs that are in the overtone series of this fundamental, so that the system is actually trying to match the timbre, not the frequency, of the desired result.
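The identity this comment leans on is sin(x) − sin(x + k) = −2·sin(k/2)·cos(x + k/2): a sinusoid at the same frequency whose amplitude 2·|sin(k/2)| vanishes only when k is a multiple of 2π. A quick standalone NumPy check (not part of the Pd patch):

```python
import numpy as np

# Numerical check: the error between two equal-frequency sines,
# sin(x) - sin(x + k), is a sinusoid whose peak amplitude is
# 2*|sin(k/2)| -- zero when k = n*2*pi, maximal when k = pi.
x = np.linspace(0, 20 * np.pi, 100_000)   # many full cycles
for k in (0.0, np.pi / 2, np.pi):
    err = np.sin(x) - np.sin(x + k)
    measured = err.max()                  # peak amplitude over the span
    predicted = 2 * abs(np.sin(k / 2))
    assert abs(measured - predicted) < 1e-3
```

So an error signal that settles into a steady sinusoid at the input frequency is exactly what a residual phase offset would look like.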
@fraser21
A year ago
I wonder: if you always sampled exactly one cycle behind in the training data for your loss signal (making wave shape the only learned variable instead of both shape and pitch), could this setup have better results? Neat video!
@_DRMR_
4 months ago
I'm thinking, looking at your patch again: what if you used the `line~` object to get better interpolation of the signal, compared to the block-based `line`?
@SimonHutchinson
4 months ago
Nice thought. I’m about a year removed from this experiment now, but I do remember the [line] speeds significantly changing the output. I may have tried it along the way, but, if not, going to audio rate would likely have a significant impact (though perhaps not necessarily a more “correct” one).
@_DRMR_
A year ago
Are you calling it a "noron" because the neuron is a moron? ;) Nice experiment nonetheless. I tried to build on your other neural network video the other month, but didn't get any useful results, unfortunately :#
@SimonHutchinson
A year ago
Ha! Yes, my accent does seem to go in and out a bit, but perhaps I'll use your excuse about calling it "norons" for unsuccessful neural networks. Sorry to hear the other video didn't work. I've put together a bit of a playlist with options in different environments. Maybe there's something in there that can be expressively useful for you - kzitem.info/door/PL7w4cOVVxL6GojicLv-6sTL-l63pVSPQP
@_DRMR_
A year ago
@@SimonHutchinson Well I tried mangling it to adjust to some specific neural-network topology I saw in a different video, but in the end it all didn't make that much sense ;)
@KoenZyxYssel
A year ago
Cool fail! I went through something similar while working on my waveshaper. Writing audio waves was laggy and unpredictable, so I decided to calculate each point in the array separately using regular functions. Anyway, the Nyquist-Shannon sampling theorem says you can't be sure you'll get the right frequency by feeding in points like this. Very relevant; look it up if you don't know it.
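For readers who haven't met the theorem: a sampled signal can only represent frequencies below half the sample rate; anything above that folds back (aliases) to a lower frequency. A tiny NumPy illustration with a made-up sample rate:

```python
import numpy as np

# Nyquist-Shannon illustration: at a (hypothetical) 100 Hz sample rate,
# the Nyquist limit is 50 Hz. A 70 Hz sine written point-by-point
# produces exactly the same samples as a 30 Hz sine of opposite sign.
fs = 100                                 # sample rate (hypothetical)
t = np.arange(0, 1, 1 / fs)              # one second of sample times
f_in = 70                                # above Nyquist (fs / 2 = 50)
x = np.sin(2 * np.pi * f_in * t)
alias = np.sin(2 * np.pi * (f_in - fs) * t)   # 70 Hz aliases to -30 Hz
assert np.allclose(x, alias, atol=1e-9)  # indistinguishable at the samples
```

This is why computing array points from a formula, without band-limiting, can land you on the wrong perceived frequency.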
@SimonHutchinson
A year ago
Thanks for the tip! It totally makes sense. I’ll look into things in more detail in the coming days.
Comments: 12