If you put code inside the K-rate tab, it will be executed 3000 times per second, at roughly equal intervals (1/3000 s = 0.333 ms). The exact time at which the code is executed (relative to the 3000 Hz K-rate 'tick') depends on where the patcher actually places your code relative to the code for all the other modules. The higher up and further to the left in the patcher GUI you place a module, the closer to the 3000 Hz K-rate 'tick' it will be executed. Most often, modules take about the same amount of time to execute each time, so your code will get called fairly regularly at 0.333 ms intervals wherever in the physical patcher GUI space it is located.
If you put code inside the S-rate tab, the patcher wraps it in a loop that runs 16 times, and executes that loop at the K-rate of 3000 Hz. Thus, your code will be executed 48000 times a second (16 x 3000 Hz), but the interval between calls will not be 1/48000 s = 20.833 µs. Rather, every 0.333 ms, your code will be called 16 times in quick succession. This means that if you're driving, say, an external DAC directly, and want to supply data (samples) every 20.833 µs (corresponding to 48000 Hz), it won't work: you'll get a significantly shorter sample interval during the burst of 16, and then a significantly longer one (essentially, 0.333 ms minus the time it took the loop to run) before the next burst of 16 samples.
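The structure described above can be sketched roughly like this (the function and variable names here are illustrative, not the actual code the patcher generates):

```c
#include <stdint.h>

#define BUFSIZE 16  /* S-rate iterations per K-rate tick (16 x 3000 Hz = 48 kHz) */

/* Stand-in for the body of your S-rate code (hypothetical). */
static int32_t my_s_rate_code(int i) {
    return (int32_t)(i * 100);  /* placeholder sample value */
}

/* Sketch of the implied loop the patcher wraps around S-rate code.
 * This function runs once per K-rate tick (every 0.333 ms); all 16
 * iterations execute back to back, so the samples come out in a
 * burst rather than one every 20.833 us. */
void k_rate_tick(int32_t outbuf[BUFSIZE]) {
    for (int i = 0; i < BUFSIZE; i++) {
        outbuf[i] = my_s_rate_code(i);
    }
}
```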
But let's assume that the DAC has an internal FIFO that is at least 16 samples deep, and that the DAC consumes one sample from the FIFO per sampling interval (i.e. 20.833 µs @ 48 kHz). In that case you could fire off 16 samples from the (implied loop in the) S-rate code, and the DAC would take care of them at the proper sampling interval, operating on one sample per sampling interval from the FIFO.
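A minimal software model of that FIFO might look like the following (hypothetical, of course: in a real DAC the FIFO lives in hardware). The S-rate burst pushes 16 samples at once, and the DAC side pops one per 20.833 µs sampling interval:

```c
#include <stdint.h>
#include <stdbool.h>

#define FIFO_DEPTH 16  /* assumed DAC FIFO depth */

typedef struct {
    int32_t data[FIFO_DEPTH];
    int head, tail, count;
} fifo_t;

/* Called 16 times in a burst by the S-rate code. */
bool fifo_push(fifo_t *f, int32_t s) {
    if (f->count == FIFO_DEPTH) return false;  /* FIFO full */
    f->data[f->head] = s;
    f->head = (f->head + 1) % FIFO_DEPTH;
    f->count++;
    return true;
}

/* Called by the DAC once per 20.833 us sampling interval. */
bool fifo_pop(fifo_t *f, int32_t *s) {
    if (f->count == 0) return false;  /* FIFO empty */
    *s = f->data[f->tail];
    f->tail = (f->tail + 1) % FIFO_DEPTH;
    f->count--;
    return true;
}
```

As long as the FIFO is at least as deep as the burst, the irregular production timing is fully hidden from the DAC's regular consumption timing.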
The point here is that if you want to precisely fire off a hardware event at a determined and constant interval, it won't work with S-rate code. Even K-rate code is dodgy, as the exact interval depends on the execution time of all the modules preceding your module. In some cases, like driving an external shift register to increase the number of GPIOs, where the exact interval is not important, it can be perfectly fine. But if you want to drive a DAC to generate a stream of samples, you want a very precise interval between sample outputs.
(This raises the question of how the Axoloti handles the ADC and DAC in the ADAU1961 codec on the Axoloti board. The answer is that the codec is set up to generate and receive one sample per 48 kHz sampling interval. The samples are sent using an I2S serial communications line which communicates with an I2S transceiver in the Axoloti. The I2S transceiver is in turn configured to continually send and receive a circular buffer of 32 samples for the DAC and 32 samples for the ADC, and generate an interrupt to the Axoloti firmware every 16 samples. This is what generates the K-rate sampling interval. At every K-rate interrupt, the Axoloti framework switches which half of the 32-sample input and output buffers is connected to the input and output modules, respectively.
Essentially, 16 samples from the ADC are accumulated in the input buffer at the S-rate; at the next K-rate interrupt these are made available to the input module and processed by the active patch, which in turn writes samples to the output buffer. At the same time, the 16 samples from the previous run of the patch are being sent to the DAC. So the patch is constantly working on the 16 ADC samples from the previous K-rate interval, while writing 16 samples which will be sent to the DAC during the next K-rate interval.
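The half-buffer (ping-pong) swap described above can be sketched like this, for the output direction (the names are illustrative, not actual Axoloti firmware identifiers):

```c
#include <stdint.h>

#define HALF 16  /* samples per K-rate tick */

/* 32-sample circular output buffer: the I2S/DMA side streams one half
 * to the codec while the patch fills the other half. */
static int32_t out_dma_buf[2 * HALF];
static int patch_half = 0;  /* which half the patch writes next */

/* Called from the half-transfer interrupt, once per K-rate tick:
 * returns the half the patch should write into, then flips halves
 * so the next tick gets the other one. */
int32_t *patch_output_half(void) {
    int32_t *p = &out_dma_buf[patch_half * HALF];
    patch_half ^= 1;
    return p;
}
```

The input direction works the same way in reverse: the patch always reads the half the DMA filled during the previous K-rate interval.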
Technically, the K-rate could be the same as the S-rate. However, this would mean that all processing for a single sample would need to take place within a 20.833 µs interval rather than a 0.333 ms interval. There are two reasons why this is not practical. First of all, there is an overhead setting up inputs and outputs etc for all modules, which would then be incurred for every sample instead of once every 16 samples. Secondly, in order to optimize the DSP code, certain parameters are assumed to be constant during a K-rate interval. The resonance parameter for a filter is a good example of this. Limiting the resonance to be constant during a K-rate interval makes it possible to optimize the DSP code for the filter, leading to shorter overall execution times. Also, many processors, although I'm not sure about the ARM Cortex-M4F in the Axoloti, have instructions which can execute several trivial operations in one cycle (SIMD instructions) which can be used to optimize DSP code. Many CPUs have caches which speed up code execution, but they only become really useful when there are (tight) loops in the code. As far as I know though, the Cortex-M4F used in the Axoloti has no cache, so this is not relevant.
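To make the per-block parameter optimization concrete, here is a sketch using a simple one-pole low-pass filter (hypothetical, and simpler than a resonant filter, but the principle is the same): the coefficient is derived from the control parameter once per K-rate tick and held constant, so the 16-sample inner loop contains only per-sample arithmetic.

```c
#define BUFSIZE 16  /* samples per K-rate tick */

static float lp_state = 0.0f;  /* filter memory across blocks */

/* One K-rate tick of a one-pole low-pass filter. The coefficient 'a'
 * is fixed for the whole 16-sample block, so no parameter math (or
 * coefficient recomputation) happens inside the tight loop. */
void lp_k_rate_tick(const float in[BUFSIZE], float out[BUFSIZE],
                    float cutoff_param) {
    float a = cutoff_param;  /* computed once per block, 0 < a <= 1 */
    for (int i = 0; i < BUFSIZE; i++) {
        lp_state += a * (in[i] - lp_state);  /* per-sample DSP only */
        out[i] = lp_state;
    }
}
```

If the parameter were allowed to change on every sample, the coefficient computation would move inside the loop and run 48000 times per second instead of 3000.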
So all in all, running DSP code in a K-rate vs S-rate environment allows significant optimizations of the code, which one wouldn't get with code processing individual samples all the way at the S-rate.
In fact, the K-rate used in the Axoloti is rather fast. In many DSP systems it can be a factor of 64 slower than the S-rate, or even slower, to minimize overhead, at the cost of increased latency. Indeed, the Axoloti system, with its 3 kHz K-rate, has an end-to-end (analog in to analog out) latency of about 0.333 ms * 2 = 0.667 ms, i.e. significantly less than a millisecond. That's about the same time it takes to transmit two MIDI bytes, or for a sound wave in air to travel about 20 cm.
Sorry for the long winded explanation... )
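For the curious, the comparison above checks out on the back of an envelope (assuming MIDI at 31250 baud with 10 bits per byte, i.e. start + 8 data + stop bits, and sound travelling at roughly 343 m/s):

```c
/* Two 16-sample buffers of delay at 48 kHz: 32/48000 s = 0.667 ms. */
double axoloti_latency_s(void) { return 2.0 * 16.0 / 48000.0; }

/* Two MIDI bytes on the wire: 20 bits / 31250 bit/s = 0.640 ms. */
double two_midi_bytes_s(void) { return 2.0 * 10.0 / 31250.0; }

/* Distance sound covers in a given time at ~343 m/s; for 0.667 ms
 * that comes out to roughly 0.23 m, i.e. about 20 cm. */
double sound_travel_m(double seconds) { return 343.0 * seconds; }
```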