Wednesday, February 26, 2020

S-Core Firmware 0.95 - Home bolt with flywheels turning at low speed; more robust and snappier speed-based feed control.



  • Home bolt with flywheels turning (at default low speed)

The previous selftest routine reset the bolt at startup with the flywheels stopped. Occasionally, such as when a stoppage or debris was present, or when the bolt was not homed but a loaded mag was inserted, this would mash something into stationary flywheels.

Now we wait until after the flywheel drive rotation check and run the wheels at low speed while homing to spit out anything that is accidentally fed. If something is fired inadvertently during this procedure, it is barfed out at about 36fps, so this is safe - assuming you have an ESC firmware with default minimum speed, which you should.

  • Improve speed-based feed control

A series of consecutive in-range tach readings (the number of which is still a build-time setting) is now required to initiate firing. This removes most of the possibility that enough randomly in-range pulses arrive during the timeout window to fire when a drive is not actually at speed - which could happen during a stall condition, where tach pulses may be clipped by start timeouts and/or motor vibration is capable of creating in-range tach pulses.
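The gate amounts to a counter that resets on any out-of-range reading. A minimal Python sketch of that logic, with the required count and margin as illustrative stand-ins for the actual build-time settings:

```python
# Sketch of the consecutive in-range tach gate. The threshold of 4
# readings and the margin below are illustrative assumptions, not
# the actual build-time settings.
def make_tach_gate(setpoint, margin, required=4):
    """Return a function fed one tach-derived speed reading at a time;
    it returns True (OK to fire) only after `required` consecutive
    readings fall within setpoint +/- margin."""
    count = 0
    def feed(reading):
        nonlocal count
        if abs(reading - setpoint) <= margin:
            count += 1
        else:
            count = 0          # any stray pulse resets the streak
        return count >= required
    return feed

gate = make_tach_gate(setpoint=25510, margin=300)
assert gate(25500) is False    # a lone in-range pulse does not fire
assert gate(18000) is False    # out of range: streak resets
assert not any(gate(r) for r in (25400, 25600, 25500))
assert gate(25510) is True     # fourth consecutive in-range reading
```

The reset-on-miss behavior is the key design point: random noise has to produce the whole streak in a row, not just accumulate hits over the timeout window.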

Offset margins and such have been revised slightly and control is a good bit snappier. Some perturbing of these settings may be called for to fine-tune velocity consistency, but it shoots pretty damn nicely with my Emax equipped primary.

03-04-20 Update: I have hotfixed some configuration parameters in this release. Posted as a Google Drive version, so should be transparent.

  • Set maximum flywheel RPM back to 25,510 since 26,000 is a deleterious overspeed for the Hy-Con and resulted in a decrease in mean velocity and an increase in velocity spread.
  • Make STC settings a bit more conservative, since while it got consistent velocity at 50/25, how I had it set may have been trading away max velocity for snappiness in a few cases. This does expose the SimonK mid-speed transient response bug more as a slightly more delayed followup shot at certain speeds, but that's OK.

SimonK FlyShot (digital speed command, closed-loop) - Default to safe minimum speed on boot, etc.

Code (plus a precompiled bs_nfet.hex for ACE discrete drive and Spider boards, etc.)

Now defaults to ~35,000erpm (~5000rpm with 14 pole motors, or about 36 fps root speed on a standard Hy-Con) at boot time.
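For reference, electrical rpm relates to mechanical rpm through the pole pair count (pole pairs = magnet poles / 2). A quick Python check of the numbers above:

```python
def erpm_to_rpm(erpm, poles):
    """Convert electrical rpm to mechanical rpm for a motor with the
    given magnet pole count (pole pairs = poles / 2)."""
    return erpm / (poles / 2)

# 14-pole motor (7 pole pairs), boot default of ~35,000 erpm:
assert erpm_to_rpm(35000, 14) == 5000.0
```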

Also, the safety governor (safety here being from the ESC's perspective, meaning avoiding exceeding the frequency limit of the inverter control loop) has been set back to the stock TIMING_MAX = 0x0080 (312,500erpm). This makes the build universal to high speed applications out of the box, whereas me leaving it at 0x00e0 in the last one may have "gotcha'd" someone.

/u/matthewbregg had a good suggestion in this thread about defaulting to a low speed. I had been considering this type of feature, but this thread got me thinking about safety and failsafe control a bit more in general. Thus this, and the new velocity watchdog code on the S-Core's end.

It really makes sense just in general that a closed-loop drive shouldn't try to spin to the moon if you haven't informed it of what speed it should go.

Some setups (very large wheels and high-velocity multistages in particular) could even conceivably be mechanically intolerant of overspeed. With ordinary BLDC configurations, that wouldn't happen, because the battery and motor kv would have been selected to avoid that, but with closed-loop configurations that are trying to have stiff speed regulation with a lot of available torque at some operating speed, it may very well be that there is enough bus voltage that reaching hazardous speeds and blowing shit up is possible if uncontrolled. My previous idea was that one would just set TIMING_MAX to an appropriate safety governor setting which removes the need for a default low speed in that regard, but in general, flashing ESCs to configure them is a pain in the ass and it is better to make a drive firmware be universal and live-configurable over the wire.

Bregg also implemented a version requiring identical FlyShot commands to be sent twice to apply a speed update, and this is a most excellent addition for applications that do not have speed feedback and cannot selftest their flywheel speeds, like most Ultracage setups and FDLs and whatnot. These variants are also compatible with my gear by nature, but the above code doesn't have that feature because I want a higher speed update rate and smoother speed adjustment when live-adjusting speed and it is not necessary to be that overzealous with speed feedback present. In any case, just like Dshot commands, one-time FlyShot commands should always be sent about 10 times anyway as a matter of course.
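The double-send scheme amounts to holding a received command until an identical one confirms it. A hypothetical Python sketch of that acceptance logic (the function names here are made up for illustration, not from Bregg's code):

```python
def make_confirmed_setter(apply):
    """Only apply a speed command when the same value arrives twice in
    a row; any differing command restarts the confirmation. `apply` is
    whatever actually updates the drive's setpoint."""
    last = None
    def receive(cmd):
        nonlocal last
        if cmd == last:
            apply(cmd)
            last = None        # require a fresh pair for the next update
        else:
            last = cmd
    return receive

applied = []
rx = make_confirmed_setter(applied.append)
rx(20000)            # first copy: held, not applied
rx(21000)            # mismatch: restarts confirmation
rx(21000)            # matching pair: applied
assert applied == [21000]
```

This is also why repeating one-time commands ~10 times costs nothing: a corrupted copy in the stream simply fails to pair up and is discarded.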

Wednesday, February 19, 2020

Hy-Con Delta Cage released - Long forend, support for Emax RS2205S.

With the Turnigy V-Spec getting scarce, it is time to option more motors. I also had some requests for longer barrels and more rail estate on the T19 platform.

Bird 1 and bird 2, let me introduce you both to this stone. Hy-Con model Delta:

All STEP and mesh files

This is equipped with a new motor option, Emax RS2205S. This, like most drone-market motors, is a threaded shaft motor.

It is widely available, torquey, not too expensive, and very axially short, which removes most of the bulges/stuff sticking out annoyingly that tend to be problems for thin-packaged horizontal cage rigs when transitioning over to threaded shaft motors from bolt-pattern motors. There are just some ~2mm tall hole plug heads on the bottom of the cover. These plugs result from keeping the old school cover dimensions; no particular rhyme or reason to why I did it that way.

I do have plans to multiplex things a bit more between cage variants and motor options as I test and add more, but the Gamma Major short barrel cage really just needs a clean sheet redraw and some more polishing anyway. So for now, it's Gamma/Turnigy or Delta/Emax.

The wheel, being that this is a threaded shaft motor, has some new features.

Locating keys specific to the Emax are provided to fit into the rotor notches and allow the shaft to be held with the wheel for initial torquing of the nut.

This is a rotor OD piloted wheel, and the shaft hole has generous clearance on the shaft to avoid overconstraint as usual.

A slight counterbore is provided to match a raised boss around the shaft on the rotor of the Emax motor and give full surface contact.

The ring of tiny holes is a toolpath control/selective infill feature to force the slicer to generate solid plastic in the web where the clamping load is applied.

A printable washer is used under the shaft nut to account for surplus unthreaded length. The web thickness is not increased unnecessarily (Ultracage) to this end as this has no structural purpose and adds a lot of inertia that might best be made optional. (Still, with how these run, I might slice wheels for them to add some more inertia anyway in the future.)

There are no left-hand threaded versions of this motor and they come with nylon insert locknuts, so don't go looking for a "CW and CCW pair" of them.

As per my usual design approach, this is a nonventilated wheel design. Excessive motor temperatures have not been a concern whatsoever.

I am only putting up 9.5mm gap wheels with this release and going forward. Closed-loop speed control completely removes the entire issue of "subcritical speed = bad" - subcritical with stiff speed regulation is actually a route to world class velocity consistency. The 9.5mm wheels turned down are easy on darts and very consistent.

Rails: Self-explanatory.

Overall I am satisfied with the Emax RS2205S. It delivers a slight improvement in flywheel drive response versus the old Turnigies (which on the new control gear is automatically translated into a reduced feed delay without changing any settings! So much nicer than manual tuning) and is another option, more solid and trustworthy than the Turnigy, but I don't quite like their "personality". They are modern and aggressive, with N52 arc magnets and tight airgaps. With so much field flux, the cogging torque is pretty gnarly, and they feel and sound more harsh. (They don't coast as well either - adjust your driveCoastTime down a bit if you have old world controls.) It's much like 3240s vs. XP180s in the old dark DC days. Sure, the latter has more torque and is objectively better, but the former feels... Happier. Same here, which is why for further motor options I am going to include some other motors with more Turnigy-like magnetics. The Racerstar BR2207S is one I already have on hand which is a sweet, amazing-sounding runner, and I will also be testing their BR2406S and perhaps BR2306S, as all of these are cheap and plentiful.

Sunday, February 16, 2020

S-Core new firmware features: power-on selftest, flywheel overspeed detection.

Link up front

The source is the best documentation for the details, but selftest is very much no longer a placeholder. Overview of all checks in order:

Bolt drive system

  • Verify that the bolt can be homed, as determined by the limit switch, within one revolution. Issues with the limit switch fail this check, as does a drivetrain that has locked up, jammed with FOD, or become disconnected from the motor - and inverter failure or an unplugged motor (obviously).

Trigger input

  • Verify that the state of the complementary trigger inputs is a valid high/low or low/high state corresponding to a switch up or down position, rather than a high/high or low/low state corresponding to a failed switch, a short, or a bad connection.
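That check reduces to a tiny truth table over the two complementary lines. A Python sketch (which line maps to up versus down is an assumption here):

```python
def trigger_state(sig_a, sig_b):
    """Complementary SPDT trigger inputs: exactly one line should be
    high at any time. high/low -> one position, low/high -> the other;
    high/high or low/low indicates a failed switch, a short, or a bad
    connection. The up/down mapping is assumed for illustration."""
    if sig_a != sig_b:
        return "down" if sig_a else "up"
    return "fault"

assert trigger_state(True, False) == "down"
assert trigger_state(False, True) == "up"
assert trigger_state(True, True) == "fault"
assert trigger_state(False, False) == "fault"
```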

Flywheel drive system

  • Verify at least casually, without using interrupts, that tach lines sit at a stable logic state while motors are undriven. This avoids a race condition risk that is posed by enabling external interrupts if the tach line may have become connected or coupled to a high-frequency noise source or signal that was not anticipated.
  • Verify that each flywheel drive emits tach transitions within one timeout period when throttled. This catches drives that have become unplugged, don't have power or otherwise are totally inop or absent.
  • Verify that each flywheel drive can spin its motor past a minimum speed and keep it there for a minimum number of check cycles during the timeout. This finds seized motors, and typical single-phasings and partial inverter failures that may still move the motor but can't actually drive the motor to any speed.
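The reach-and-hold check boils down to counting consecutive check cycles at or above a minimum speed within the timeout window. A simplified Python sketch, with illustrative numbers:

```python
def spinup_check(speed_samples, min_speed, required_cycles):
    """Pass if the drive reaches min_speed and stays at or above it
    for required_cycles consecutive check cycles within the sample
    window (the window length stands in for the timeout).
    speed_samples is one tach-derived speed per check cycle."""
    streak = 0
    for s in speed_samples:
        streak = streak + 1 if s >= min_speed else 0
        if streak >= required_cycles:
            return True
    return False

# Healthy drive: accelerates and holds speed.
assert spinup_check([0, 800, 2600, 5200, 5300, 5300], 5000, 3) is True
# Single-phasing drive: twitches but never gets to speed.
assert spinup_check([0, 300, 500, 400, 600, 500], 5000, 3) is False
```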

Closed-loop speed control integrity

  • After setting and locking down the speed prior to entering normal operation (either by defaulting to the current tournament lock setting or by exiting the user speed configuration mode), spin up flywheels and verify that, within a timeout period:
    • No single tach periods indicative of critical overspeed events occur.
    • The noise-filtered speed of each drive falls within margins of the speed it ought to be set to for many consecutive check cycles.
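The single-period overspeed test is just a comparison of each raw tach period against the period corresponding to the maximum allowed speed. A Python sketch, where the pulses-per-electrical-revolution figure is an assumption about the tach format:

```python
def is_overspeed(period_us, max_erpm, pulses_per_erev=1):
    """Flag a single tach period shorter than the period that
    corresponds to max_erpm. pulses_per_erev is an assumption about
    the tach signal format, not a property of SimonK MOTOR_DEBUG."""
    min_period_us = 60e6 / (max_erpm * pulses_per_erev)
    return period_us < min_period_us

# 25,510 rpm ceiling on a 7 pole pair motor is ~178,570 erpm,
# i.e. ~336 us per electrical revolution:
assert is_overspeed(300, 178570) is True    # critically fast
assert is_overspeed(400, 178570) is False   # within limits
```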

Combined with an ESC firmware that defaults at its own boot to a very low speed setting, this one boot-time check renders it effectively impossible for some unforeseen signalling issue, drive reset or bug to cause considerably hot velocity.

To report the results of these self-checks when they fail, the familiar construct of "fault codes" now shows up here. These consist of a "major" and "minor" component and are organized approximately by fault category or subsystem.

The stepper motor is conveniently useful as an audio/tactile feedback device. It is used to emit a (not too loud) alarm, then blip out the fault code responsible using "growls" (familiar to anyone who has turned on an old T19): first major, then 700ms of silence, then minor. Both values are always to be greater than 1, and I prefer that they be less than 9 for brevity.
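The annunciation can be sketched as a flat event list; on real hardware each event would drive the stepper. A Python illustration (the exact growl timing is not modeled):

```python
def fault_blip_sequence(major, minor, gap_ms=700):
    """Build the annunciation sequence for a fault code: alarm, then
    `major` growls, a silent gap, then `minor` growls. Per the stated
    convention, both components must be greater than 1."""
    assert major > 1 and minor > 1, "both components must exceed 1"
    seq = ["alarm"]
    seq += ["growl"] * major
    seq += [("silence_ms", gap_ms)]
    seq += ["growl"] * minor
    return seq

assert fault_blip_sequence(2, 3) == (
    ["alarm"] + ["growl"] * 2 + [("silence_ms", 700)] + ["growl"] * 3)
```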

So far there are only codes for selftest failures or drive overspeed events, codes do not persist past the current boot, and all faults are also terminal and disable operation, as all of the current ones in a strict sense render operation either impossible or potentially unsafe.

The center blaster below is running this code right now and it is effective and solid.

This image was at CFDW. The lower left-hander blaster with the rail on the cage is Junior7's unit, which I updated the firmware on at the event (to 0.8) and swapped out the selector switch (more on this switch later). I haven't posted that build in detail - it's a 221 with S-Core 1.0, ACE LC1s, Turnigy Vspecs, a lefty stock base, lefty auxiliary controls, plus an underbarrel rail and one very long mag release that he wanted. Filaments are Atomic marble, Atomic translucent aqua and Yoyi translucent orange.

Same blaster's print kit before assembly.

Friday, February 14, 2020

S-Core SBBM v1.5

The original S-Core did its duty as a dev mule, but it's time for a "production" board for both me and other people to use.


  • Delete the solenoid driver - It is decided. Steppers are NOT going anywhere.
    • Solenoid-driven magfed applications are going to get their own, smaller, simpler board. If a coil driver is needed at the same time as the DRV8825, that can be something plugged into the GPIO connector.
  • Switch pusher motor DC link cap and logic power supply filter cap to through hole 10mmOD electrolytics - No more SMD lytic hassle. Also cheaper, with many lower-ESR options available, which matters for the former.
  • Fix inductor footprint near AOZ1282 for easier soldering
  • Fix MCU ceramic resonator footprint for easier soldering
  • Put all components on front side - Does it matter? Not really... There are through-hole leads sticking out the back for all the headers and such, so you can't whack it against a flat surface anyway without some standoff, but it felt right to do this. When I do the mini version, this likely won't be the case.
  • Add mounting holes
  • Replace cheesy little Bourns TC33 trimpot for tournament lock with a Bourns 3362
  • Replace DC bus wire pads with wire holes plus 2 pin 0.1" connector footprint for options and better wire management in-blaster
  • Add TVS diode footprint (SMA package) for DC bus transient overvoltage protection (opt.)
  • Change BOLT connector to 2 pin
  • Change GPIO connector to 4 pin (adding pin formerly eaten by the solenoid driver)
  • Add ISP header pinout legends (Not shown in the above render)
  • Add motor drive pinout legends (Not shown yet as well)
  • Improve MCU decoupling
  • Via stitch front and back ground planes generously and improve ground connectivity/impedance to everything
  • Circle VREF test points in the silkscreen, if they needed to be any more obvious
30x75mm. 2 layer, 7/7mil, 1oz. Minimum drill 20mil (I use large vias by habit). Should be easy stuff to get made by any vendor.

Monday, February 10, 2020

S-Core firmware 0.8 and new feature overview.

This is the firmware these new blaster manager boards run.

All the important (um) core functionality is there and solid - the main bit left to work on, really, is just doing a proper power-on selftest now that we have the hardware capabilities to do so. Beyond that, it's mainly a matter of implementing alternative UI behaviors that I or anyone else wants.

So into all the new stuff.

- Implementation of FlyShot protocol; integration with adjustable-speed flywheel drives

The protocol described in this post is implemented.

Support on the motor controller's end is required. This may become a compile time option, but in general, avoiding ESC reflashing and setting speed over the wire from the blaster manager's end is easier and more flexible, even if you program your blaster manager to set a fixed speed only.

- Implementation of speed feedback-based feed control (=closed-loop single-trigger control. CLOSE ALL THE LOOPS!!)

"Blind" feed delays/delay scheduling approaches to account for motor acceleration with no awareness of actual wheel speeds are so last decade and really needed to go, just as much as open-loop voltage command flywheel drives needed to go. In particular:

  • Scheduled delays require tedious manual tuning to closely match the dynamics of a given flywheel/drive system and produce snappy shots, but consistent velocity, under a variety of conditions. Oftentimes they get just "Meh, good enough!"-ed to the generous side by a builder who can't be bothered squeezing every millisecond of latency and every fps out of the damn thing - including me.
  • Scheduled delays cannot, obviously, react whatsoever to unanticipated variances. This could be anything from a change in DC bus voltage and sag due to batteries not being ideal and varying in voltage and IR with SOC and temperature, to a barely noticeable desync event during a spinup that slows it down slightly, to some debris stuck in a cage or cold grease in a bearing causing friction on a wheel, to an outright jam or other situation where a wheel fails to turn at all.
    • Thus, delay settings must be conservative to cover the vast majority of these variances.
    • Also, thus, scheduled delays will completely fail at their job in more severely unexpected situations - by feeding ammo when it would produce a crappy shot, or by feeding ammo when it is completely inappropriate to feed and causes or worsens a stoppage!
To this end, tach signal (the format generated by SimonK MOTOR_DEBUG is what is expected here) from each motor drive channel is now used to control feeding based on actual motor speeds.

This not only greatly improves robustness against unexpected conditions, but renders the system 95% self-tuning, since the whole issue of predicting drive dynamics with math and/or multi-stage delays is fundamentally sidestepped. Plug in ANY motor, ANY controller tune, ANY flywheel inertia, ANY drag torque (cooling impellers and whatnot) - Doesn't matter one bit (well, as long as the drive is capable of reaching that speed at all). The feed delay optimizes itself on the fly for each spinup.

The main control logic is also extremely simple. Simpler than scheduled delays.

There is only one set of parameters left to adjust - a speed offset margin, which is a margin of error within which the speed is considered to have reached the setpoint. This accounts for achievable flywheel speed control loop performance, and it also provides a means of compensating for continuing wheel acceleration within the mechanical travel time of the bolt, while still reliably inhibiting feeding if the speed is out of range enough to produce a tangibly wonky shot, let alone grossly unsafe to feed at. In this version, a linear interpolation (which is perhaps not the correct curve to apply, but whatever, works well enough) is provided between endpoints based on the current speed setpoint - the idea being to account for the greater torque, and thus acceleration, available from a real drive at lower speeds.
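The interpolation itself is a clamped lerp over the speed range. A Python sketch with made-up endpoint values, not the shipped settings:

```python
def offset_margin(setpoint, lo_speed, lo_margin, hi_speed, hi_margin):
    """Linearly interpolate the allowed speed offset margin between
    (lo_speed, lo_margin) and (hi_speed, hi_margin) endpoints, wider
    at low speed where the drive has more spare torque to keep
    accelerating during bolt travel. Endpoint numbers below are
    illustrative only."""
    t = (setpoint - lo_speed) / (hi_speed - lo_speed)
    t = min(max(t, 0.0), 1.0)                 # clamp to the endpoints
    return lo_margin + t * (hi_margin - lo_margin)

# Wider margin at 10k rpm than at 25k rpm:
assert offset_margin(10000, 10000, 800, 25000, 300) == 800
assert offset_margin(25000, 10000, 800, 25000, 300) == 300
assert offset_margin(17500, 10000, 800, 25000, 300) == 550.0
```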

- Implementation of flywheel speed limit (tournament lock) with secured board-mounted potentiometer (i.e. requires tools to access)

This is self-explanatory.

- Implementation of user analog potentiometer and selector input devices

This is self-explanatory. See selective fire discussion below.

- Implementation of user speed configuration from minimum up to tournament lock setting

At boot time, if the trigger is not down, the blaster defaults to the tournament lock setting.

If the trigger is down, the blaster enters speed configuration mode. The flywheels spin continuously for audible user feedback of the speed, while the analog knob varies speed between minimum speed and the tournament lock setting. Releasing the trigger enters normal operation at the current speed setpoint until the next power-up. This avoids accidental speed changes.

- Implementation of selective fire

Can you hear the crackling of ice forming? Because hell might be freezing over.

Yes, I put a selector on a T19.

This was motivated by a few factors:

  • Popular demand. T19 has started to gain traction among locals as a platform - some of them are not full auto natives.
  • Future flexibility of having the control itself - we aren't just limited to selecting burst modes! What if we want to select profiles that also contain speed, ROF, and offset margin ("snappiness level") combinations for various situations? But there is a lot more than that under software control. Maybe we could have a HvZ stealth mode that sets a low wheel speed, limits the voltage command of flywheel motors to hush the magnetostriction under high motor current on startup, sets a moderate ROF so the bolt doesn't go clack when it cycles, and sets 1/16 microstepping to quieten the bolt motor? Well that can be done now. If I have a need for something like that, I now have enough UI means available to turn it on and off. That's a way of justifying what I long saw as a superfluous control.

Straightforward - the logic is like a firearm's disconnector implemented in software and has the same behaviors as such a mechanical device. There is no paintball-style shot buffering/queuing mechanic, and I don't personally think there ought to be, although if any game organizer starts to push semi-auto-only rules I am totally open to implementing that along with 2 finger triggers!

Something to note is that I still have true full auto. Full auto is NOT a 99 or 999 shot burst (FDL, for instance). For each selector setting, there is an index into both a boolean isBursts array and an integer bursts array. The first is a mask for the disconnector: if the entry at that index is false, the burst disconnecting logic is completely out of gear and it will fire truly ad infinitum. Does that practically matter? No, not at all.
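The table-driven disconnector can be sketched in a few lines of Python; the mode tables below mirror the described isBursts/bursts arrays with the stock modes:

```python
# Selector-indexed mode tables, mirroring the described isBursts and
# bursts arrays. Stock modes: full auto, 2 round burst, semi.
is_bursts = [False, True, True]
bursts    = [0,     2,    1]

def shots_fired(selector, trigger_held_cycles):
    """Shots fired for one trigger pull held for the given number of
    bolt cycles. With isBursts false, the disconnector is out of gear
    and firing continues as long as the trigger is held."""
    if not is_bursts[selector]:
        return trigger_held_cycles          # true full auto
    return min(bursts[selector], trigger_held_cycles)

assert shots_fired(0, 37) == 37   # full auto: fires as long as held
assert shots_fired(1, 37) == 2    # 2 round burst
assert shots_fired(2, 37) == 1    # semi
```

Note that, like a mechanical disconnector, releasing the trigger early simply ends the burst short - there is no queued shot to deliver afterward.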

Stock modes are full auto, 2 round burst, semi. I get a lot of demand and a lot of positive feedback on the 2 round burst.

- Implementation of live ROF adjustment

During normal operation, the analog knob sets ROF between the configured minimum and maximum at compile time.

ROF adjustment is linear, ROF adjustments take effect immediately at any time during the idle state (not firing), and ROF range endpoints are now configured in RPM rather than obtusely in microseconds per subcommutation. There is still a strict reliability limit to be set per-motor (88uS for the usual OSM 17HS16-2004S1). If you want to run it up to there, just set the maxROF a little beyond that limit.
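For a sense of the units conversion, a microstep interval in microseconds maps to bolt motor RPM through the steps per revolution. A Python sketch, where the 200 step/rev motor and 1/8 microstepping are assumptions for illustration:

```python
def us_per_microstep(rpm, full_steps_per_rev=200, microsteps=8):
    """Convert a bolt motor speed in RPM to the microstep interval in
    microseconds. The 200 step/rev motor and 1/8 microstepping here
    are illustrative assumptions, not the S-Core's actual config."""
    steps_per_min = rpm * full_steps_per_rev * microsteps
    return 60e6 / steps_per_min

# Under these assumptions, the 88 us per-motor floor corresponds to:
limit_rpm = 60e6 / (88 * 200 * 8)
assert round(limit_rpm) == 426
assert us_per_microstep(426) > 88      # just under the reliability floor
```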

ROF is a maximum bolt travel speed adjustment, and applies to all fire control modes at all times. Thus, it allows both changing the cyclic rate of fire, including for bursts, and changing the bolt speed and force (inversely related) - not that crappy ammo usually poses any problem in the field even at max ROF settings.

Sunday, February 9, 2020

S-Core 1.0, a somewhat-preliminary single-board blaster manager.

The Google Drive directory

This is my first crack at getting rid of Arduinos, perfboards, DRV8825 carrier boards, hand wiring and off the shelf 5V converter modules and replacing all that noise with a single PCB. Done around Aug-2019 and since then I have been running a couple of these.

This has an Atmel ATmega328P MCU, an AOZ1282 5V supply with the input filter from a recent post, an onboard tournament-lock (speed limit) trimpot, and inputs for a SPDT center-off selector, an analog pot knob, the bolt limit switch, and the SPDT trigger like any other T19. All of these contact input lines have the usual proven 1k pullup to vcc and 100k protection resistor going into the MCU pin. But the elephant in the room is the TI DRV8825 stepper driver and all its supporting componentry, including a Vref trimpot, Vref testpoints, DC link cap, and current shunts. There is also an open-drain "single ender" solenoid driver that takes a DPAK (designed at the same time as the ACE LC1...) mosfet on this board - though I haven't used that for anything I would put this in yet. 2 layer 1oz copper, pretty undemanding stuff.


BOM (generated by Digikey)

Firmware v0.8

A Reddit thread about this

These are solid and do what they ought to. They have great thermal performance for the DRV8825 and run it cooler than Pololu boards do.

Misgivings are numerous!

  • I used a nonstandard motor drive signal connector pinout with 5 pins. Back when I designed this, I was not settled on where logic power supplies were even going to be physically located in a blaster - so these are: ground, NC (where BEC output is on old RC ESCs), throttle, tach, 5V (so far always NC on ESCs). Of course only 3 are necessary. I have settled on 3 pin with ground, tach, throttle as a pinout now after modding a few Spiders for tach with cables wired that way.
  • The ceramic reso and AOZ1282 inductor footprints are default ones that are too small and a massive pain to solder. The reso's alright, but the inductor is a bitch. Ok for reflow, but dumb layout for a hand-solderer. Needs a bit more area on the pad edge to heat from and it's OK. Lesson learned.
  • I discovered machined-pin headers: 2 pin connectors are secure if you just use good headers and not cheap ebay ones for the female side. The bolt limit switch can be a 2 pin.
  • There are no mounting holes since I opted to have a very fast cheap to print board bracket in the 19 instead. But for futurestuff that might be slimmer than a 19, I want either holes in the board or provision for slide-in mounting.
  • SMD electrolytics - again. Another pain in the ass. Lesson learned.
  • Looking back, my MCU decoupling was a bit meh (there's a via in one of them and the other has some trace length), but better than most commercial stuff and these never reset or glitch out.
  • The solenoid driver just needs to GTFO. There should be ONE OF a noid driver OR a stepper driver OR a third throttle channel on a given board. I actually put that in because I was expecting to use these in a HIR project where there would be a small noid for the feed gate (like in a Zeus or my old hopper loaded thing) and the DRV8825 would be running a feed roller motor. But that's probably best left off, and made external/piecemeal for such a very special rare case. It's deadweight in a 19, and in a solenoid-driven blaster, the 8825 is deadweight. Those apps need separate boards. Plus, I think I would use a halfbridge in a dedicated hi-po solenoid drive board today instead of a single-ended powerstage, so that decay mode could be fast, if that is worth anything.
  • Layout could be tighter.
  • Those Bourns TC33 trimpots. These are the same tiny, tiny stamped sheetmetal/ceramic things found on DRV8825 carrier/Stepstick boards for Vref. Small footprint, not hard to solder. Also found on Narfduino Brushless Micro (etc.). The one for Vref on the 8825 is fine - you use it once in a blue moon to set motor current, just like the one on a stepstick carrier. But the one for the tournament lock - that really needs to be something larger, beefier, easier to turn with a small screwdriver and yet difficult to get accidentally turned by things brushing against it, accurate, and hard to break or contaminate. Like... a regular blue through hole Bourns trimpot.

Time for a brief component shopping session and a re-lay.

Other than that, MCU pinouts and hardware all good, works great, no problems. Running one of these in this:

"Crystal Patriot": Yoyi translucent red PETG, Yoyi clear (translucent whitish in a thick part) PETG, Yoyi translucent orange PETG for the flash hider and auxiliary controls, Inland/Esun trans. cobalt blue PETG.

Of course to go along with this there is a selector and an analog knob added as controls on the blaster. I put up the modded grip base and the knobs as well as the S-Core board bracket (glue the board in). This is the potentiometer and the rotary switch used (wire as SPDT center off). Can't beef with either, particularly the rotary switch, which gives a very solid and satisfyingly clicky selector.

Saturday, February 8, 2020

Another in process ESC project with Infineon 6EDL04 gate driver.

This is still more under construction than the LC 2, but it's in the pipe as well for the next big PCB batch. Working name ACE-NX.

LFPAK56 power stage. 25x46mm 2 layer board, same dimensions and a lot of structural similarities to the LC 2.

Infineon 6EDL04 gate driver (6EDL04N02PRXUMA1) - Datasheet. Cool, huh? The overcurrent trip will be an interesting feature to hook up and try out sometime later as well, as it could help increase the bulletproofness further, but this one doesn't use it; the idea here is straightforward and simple.

Power supply is usual trusty ST L5150BN LDO for the 5V and AP3012 boost off the 5V rail for the +12V gate drive supply. The SOT-223 linear and then an AP3012 feeding the gate driver is a standard setup to get a stable gate drive rail, a lot of BLHeli_Whatever things with Fortior drivers have something to that effect.

I haven't seen anyone else make use of these Infineon EiceDriver chips yet. I have seen plenty of IR2101s and similar, a few FAN7888 projects on RCgroups, and a ton of Chinese drone stuff with Fortior FD6288. The 6EDL04 caught my attention for being a smallish TSSOP package and having integrated bootstrap diodes.

The Fairchild FAN7888 and Fortior FD6288 were also among the candidates before I picked the Infineon, but both require external diodes, the Fortior is pure Chinesium (I literally cannot find a datasheet that isn't in Chinese) and not the easiest to get shipped quickly and reliably in the US, and the Fairchild is a large SOIC package. There was a promising MPS MP6531 found by RCgroups user AlkaM which goes a step further and includes its own buck/boost gate drive supply onboard, but that one has availability issues and is also an exposed-pad package, which is alright but a tad annoying.

The Infineon and AP3012 almost package better than the discrete drive setup! Killed quite a few components and traces. If I wind up liking the results of these, discrete drive may be on the way out of my designs.

Pinouts on this thing are bs_nfet except that obviously, the high side drive is NOT inverted like it is with the discrete drivers. A board definition is going to need to be created for that, probably called ace.hex. Perhaps later on it will be prudent to swap some pins around in the board def for better routing between ICs if the 6EDL04 setup works nicely.

This will obviously be able to run some extremely beefy mosfets that have larger gate charges with the added gate drive grunt. PSMN0R9 is what I have in mind.

More on discrete gate drive ESC boards; logic power filtering; a 2 channel drive project.

This is something I have cooking:

A 2 channel controller with some specific layout/wiring features. This is targeted at restoring SimonK availability to the FDL-3 (including closed loop adjustable speed developments and whatnot, if desired) and at least providing some insurance that they don't HAVE to get their ESCs yanked out from under them again in the future - even if most people keep using COTS BLHeli_32 stuff in their setups for the time being. This version is a discrete drive unit, with LFPAK56 devices.

You can also see the SMPS down there on the right; it's an Alpha/Omega AOZ1282 1.2A output, 36V max input buck which I have found to be a cold running, solid little setup on my S-Core boards. That ought to handle all the logic power requirements (all of which are 5V) in the blaster with some current to spare and run cool enough.

Speaking of the devil: logic power supplies in blaster systems...

That has always been a point of intense worry of mine, having had some bad experiences early on (and who HASN'T had a frustrating random-reset gremlin at some point?). Although the culprits were a number of other bad things I was doing, and things stock ESC firmwares were doing at the time, those experiences led me to insist on SEPIC/buck-boost regulators to power blaster managers for the longest time - the idea being that during some horrible bus sag event or negative noise spike, you have a tad more input voltage margin. What led me to reconsider was realizing a few key things:

  • Most SEPIC regulators, including ones I used with success, can't actually run down to less than 4.5V or 4.0V anyway. It's a big reg selection, footprint, and cost constraint over... what, half a volt? A volt? A volt and a half of margin gained, absolute max.
  • Holy hell, if the DC bus in something DOES ever have more than -10 volts of peak noise, there is a Very Serious Problem going on, and other things like mosfets exposed to that are probably in danger. If there is more than -10 volts of SAG on a longer timescale, that's also a battery that is completely and utterly incapable of powering the load (or maybe mistuned inverters trying to commit suicide with a kabillion amps).
  • My ESCs are running on linear LDO regulators without that filter with good success. Never a random reset.
But once burned, twice cautious. Hence this circuit has made it into a few designs, including the S-Core and the current state of the above board:

Note D7 and C16 on the input - this is a negative spike filter of the sort recommended in the datasheet for my favorite LDO linear, the ST L5150BN (which is an automotive-targeted part). The big Schottky diode prevents any negative pulses on the input from sucking charge out of the cap, and the cap caches a bit of sag ride-through energy to use during such a transient. There is a bit of forward drop on the diode but it's a Schottky so not much.

Is it necessary? No. I can say that pretty solidly - all my ESCs, including ones I designed, have no such filter. Good decoupling on your MCUs, big enough 5V rail bulk caps, and making sure all motor drives have the proper DC link capacitors fitted does the trick so well that not even an ancient and empty 3S pack causes any resets. But it's peace of mind if this filter fits, and it costs like $2.50. Do watch out for inrush current into the cap: beef up the diode, or else do something to mitigate the surge when switching this on. That's why the diode shown is a beefy SMA part rated for an Ifsm of 100A (which appears to be enough).

Something interesting is that this circuit is effectively a high-powered version of a peak detector. Could that make a DC bus overvoltage transient more dangerous? Maybe. But momentary overvoltage spikes, if they are present, can still kill a switchmode (or any) reg anyway, so input rating headroom is important (as is trying to avoid noising up your DC bus in the first place). The AOZ1282 is rated for 36V and my go-to linear is 40V. Worst-case worrying me wants to whack a TVS diode on the input of the reg. That's what I did on the Twinverter board where that filter feeds gate drive too - or at least the footprint is there if you are concerned enough to use it. Same with the main diode in fact - that can be a wire instead if you don't want it.

Back to some gate drive matters:

Obsessive me gets to wondering about stuff like...

What is the actual high-side gate voltage level?

Looks like it might be "About what you get when you charge 1uF to the DC bus voltage minus a diode drop and a Vcesat and then load that with a gate capacitance".

Indeed it CAN be that under certain circumstances - namely, if the phase is low (or is being driven low) just prior to turning the high-side switch on. This happens constantly if you are using complementary PWM, in which (assuming HIGH_SIDE_PWM = 0) the high-side switch is turned on during the low side's off-time.

But in plenty of situations, the phase would be near neutral before getting driven high by turning on the high-side switch, and in that case the cap might not be charged to the full voltage.

The bootstrap cap C4...6 and the ballast resistor R11...13 form an RC network. The time constant (time required for about a 63% change in voltage after a sudden step) would be on the order of 2ms (2.2 for common boards, 1.6 for mine) with the usual R and C. Given that, at high motor speeds - where an entire cycle of the AC waveform could take 0.3ms, with 0.1ms between high-side drive events and 0.056ms between commutation steps - the cap is gonna stay pretty close to DC bus voltage. But at low speeds, where a single commutation might be 8ms or more (this is one of the main V/Hz inflection points in SimonK current limiting settings), the bootstrap voltage could have time to fall and become more like half DC bus voltage - since the phase is no longer fully low, it's near neutral.


Well, not quite... The "neutral point voltage" is a concept that arises out of the AC polyphase nature of the inverter section and motor relative to the "one-sided" (one leg considered the ground reference for logic and gate driving purposes) DC bus. Neutral, and the "Zero" that the phase voltage is crossing and we're sensing (through dividers to put in safe 0-5V range) to know the rotor position, is NOT just half DC bus!! It's half the PHASE voltage. Which is half DC bus at 100% voltage command, but is MUCH less (sixth, quarter, half...) than DC bus voltage at low speed given that your V/Hz (POWER_RANGEx) settings are sane, and are not trying to do high duty cycle into a basically-stopped motor (which is distinctly bad anyway).

The RC filter effect on the bootstrap would just help to make that more true and make the voltage on the cap constantish by averaging out the PWM waveform on the phase voltage which is at 5-20+ kHz per settings.

So the voltage across the cap (bus minus neutral, roughly) is bound to be higher than half-bus in practice at low speed. And by the time a drive is at a speed range where it can safely open up at 100% duty and put the neutral at half-bus level (stock SimonK: 1ms/commutation, my tune: 0.8ms), a commutation is already well under one time constant - so perhaps 70% of bus is a more appropriate guess than 50%. That has plenty of margin for 4.5V design mosfets.
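
A quick numeric sketch of the estimate above, if you want to play with the numbers. This is first-order RC only; the pack voltage, neutral fractions, and the 1.6k/1uF values are my assumptions, and the diode drop and Vcesat are ignored:

```python
import math

# First-order RC sketch of the bootstrap cap voltage, per the reasoning
# above. Pack voltage and neutral fractions are assumed for illustration;
# diode drop and Vcesat are ignored for simplicity.
def bootstrap_voltage(v_bus, v_phase_avg, t, tau, v0=None):
    """Cap relaxes toward (v_bus - v_phase_avg) with time constant tau."""
    v_target = v_bus - v_phase_avg
    if v0 is None:
        v0 = v_bus  # assume freshly charged to full bus voltage
    return v_target + (v0 - v_target) * math.exp(-t / tau)

TAU = 1.6e-3   # 1.6k ballast x 1uF cap (2.2ms on the common 2.2k boards)
V_BUS = 12.6   # 3S pack, assumed

# High speed: 0.056ms between commutation steps, phase still swinging hard
# low - the cap barely moves off full bus voltage.
print(bootstrap_voltage(V_BUS, 0.0, 56e-6, TAU))        # ~12.6 V

# Low speed: 8ms commutations, phase averaging near a low neutral (say a
# sixth of bus at sane V/Hz) - the cap settles toward bus-minus-neutral.
print(bootstrap_voltage(V_BUS, V_BUS / 6, 8e-3, TAU))   # ~10.5 V

# The ~1ms/commutation 100% duty crossover: under one time constant even
# with neutral pessimistically at half bus, this comes out near 80% of
# bus - comfortably above the naive 50% guess.
print(bootstrap_voltage(V_BUS, V_BUS / 2, 0.8e-3, TAU) / V_BUS)
```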

But wait, there's more.

Remember when I mentioned that discrete drivers don't have UVLO? Well, they DO have UVLO assuming you keep one bit of the original "Hobbytroller" design tenets I find rather clever.

The logic power is normally derived by a ~0.5V dropout LDO or switchmode buck (not -boost) regulator from the DC bus voltage (which of course charges the bootstraps up). The AVR has its brownout detector enabled and set to hold the chip in reset below about 4.0V (as operation below that at 16MHz is out of spec and risks crashing).

Bang. There's your undervoltage lockout. For ALL the gate drivers.

Just use mosfets spec'd for 4.5V gate drive levels.

Plenty of considerations going on with these "crude" "janky" "cheapo" gate drive circuits, huh? They're clever, brutal, elegant and awesome. Dronepeople might want to bash them, but they're actually far better thought through than I realized early on. ("Can anyone show us a discrete drive board that failed in a manner that a driver-equipped one wouldn't have?" Crickets, most likely.)

Now there's another gotcha. You don't want to do anything (such as, um, wiring it to an external 5V supply... like I did back several years in an old project) that prevents critical (<4.5 ish V) DC bus dips from disabling the AVR - because then you just disabled your UVLO. IF you want to do something to stabilize, filter or de-sag logic power, you must ALSO stabilize the gate drive rail feeding the bootstrap diodes AND there must be a BUCK regulator feeding the MCU off that so that its brownout detect trips if the gate drive collapses. See the Twinverter? Look where the gate drive rail comes from. Yep - The input filter.

Bootstrap diodes. Hmmmm....

Most ESC boards have 1N4148 or similar for the bootstraps, whether discrete or with gate drivers that require external diodes (most of them including the now popular Fortior FD6288 and the old school IR2101). Often, the 4148 is or was a distinctive glass-passivated MiniMELF style that you will see on ZTW boards. On Afros, it's a damn tiny SOD-323 or -523 or something (looks like an 0603 scale thing). On my ACE LC1, I have spec'd a 4148 as well. Is there a better choice?

Well, first off, they need to be fast enough to switch on and recharge the cap effectively at high drive frequencies, so not some slow clumsy rectifier. And they must have as little reverse leakage as possible, since there is NO supply besides the cap to support the gate voltage during an on-time. The 1N4x48 series cover those pretty well.

What the hell kind of pulsed current do these need to withstand? There's a circa 1uF ceramic capacitor that it's feeding. Seems like a potential inrush issue. That's my main worry point.

At switch-on, the phase nodes are not clamped low - the low-side switches are off and the phases floating. The phases might be pinned high if the bootstraps were still charged from the last power-up, but once the MCU turns on, they will shut off. At this point, the current path for fully charging the cap is through the sense voltage divider resistors, which pull the phases down to ground. That 18k + 3.3k is a nice soft-start precharge for the caps. By the time we're trying to drive, they filled up long ago (3 time constants is ~60ms). So startup from cold - that's OK.
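
The arithmetic behind that, for the record (treating the whole 18k + 3.3k divider string as the series charge path is a simplification, but it's the right order of magnitude):

```python
# Cold-start precharge of the bootstrap caps happens through the phase
# sense divider pulling the floating phase node to ground. Treating the
# full 18k + 3.3k string as the series charge path (a simplification)
# gives the soft-start time constant:
R_PATH = 18e3 + 3.3e3   # ohms
C_BOOT = 1e-6           # farads

tau = R_PATH * C_BOOT
print(tau * 1e3)        # ~21.3 ms per time constant
print(3 * tau * 1e3)    # ~64 ms to ~95% charged - a gentle precharge indeed
```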

While running would be a more aggressive case perhaps:
  • Phase driven high. Cap, gate and ballast node sitting above DC bus by however much. Transistor off. Diode blocking gate/cap voltage against returning to the DC bus.
  • High-side signal goes back high (=off). Transistor turns on. Gate plummets and the mosfet turns off.
  • Cap voltage does NOT plummet with the gate. The ballast resistor is between the cap and gate and is too high impedance for it to discharge during the switch-off time so it only drains a little.
  • As the phase node swings down toward neutral, one end of the cap goes with it; eventually, as the phase is driven low, the end of the cap that had been feeding the gate crosses below the DC bus level and the diode starts conducting, recharging the cap.
So it's however much charge bled off via gate leakage, diode leakage, and leakage through the 1.6K resistor during the switch-off time of the fet - which is pretty snappy (a few hundred ns ish?) - that is being "refilled" at a rate controlled by the slew rate of the phase voltage. I need a scope right about now to get numbers off real boards...
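
Scopeless back-of-envelope in the meantime - all of these numbers are assumptions, but they show why the ballast-path drain during switch-off is a non-event:

```python
# Rough estimate of charge lost off the bootstrap cap during the high-side
# switch-off interval. All values assumed for illustration.
V_CAP = 12.0        # volts on the bootstrap cap, roughly
R_BALLAST = 1.6e3   # ohms
C_BOOT = 1e-6       # farads
T_OFF = 300e-9      # "few hundred ns ish" fet switch-off time

i_bleed = V_CAP / R_BALLAST   # ~7.5 mA drained through the ballast path
dq = i_bleed * T_OFF          # ~2.3 nC of charge lost
dv = dq / C_BOOT              # ~2.3 mV of droop on a 1uF cap - negligible
print(i_bleed * 1e3, dq * 1e9, dv * 1e3)
```

Even being generous with the leakages on top of that, the cap barely notices.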

4148s appear to never die here and they are in many gate driver datasheets as a bootstrap diode with ~1uF bootstrap caps. But I am going to spec 1N4448 going forward. Unlike 4148s which can be EITHER 2A or 4A pulse rated per manufacturer, 4448 are always 4A. There are both SOD-123 and MiniMELF versions of 4448 to suit your preference and they are cheaper than dirt just like 4148s. This is probably a "Don't fix what ain't broken" situation. Most Schottkys are either higher leakage or have equivalent or worse pulse current ratings.

Friday, February 7, 2020

WIP: ACE LC 2.0 - A more refined discrete drive ESC. Plus an elaboration on discrete gate drive.

The LC 1.0 is fine, but I don't think it is quite the Afro 20FS/30 replacement that we need. The wire routing is not the easy and tidy affair it should be, the mosfets are old tech, the long skinny form factor is not the most universal and there are many layout lessons learned and so forth. Before I get sidetracked, I want to make sure what I started is finished here.

Goals were:
  • Use more modern mosfets with lower Rds(on) and higher ratings.
  • Improve all high current buswork and general beef level.
  • Have an Afro 20A FS/Afro 30A FS/ZTW Spider 30 like form factor.
  • Have motor phase pads at one end. DC bus and signal pads at other.
  • Fix resonator footprint for easier soldering.
  • Keep everything that worked well - sense network, gate drive, logic power, etc.

After several revisions, this is the result:


  • ATmega8 MCU
  • Discrete drive (of course)
  • 18k/3.3k/18k sense network
  • LFPAK56 power stage. This can take any number of Nexperia devices, but the prime candidates for these discrete driven boards are PSMN1R4 and PSMNR70.
  • 25x46mm
  • 2 layer 2oz 10/10mil
  • Hand solderable

If you have been following this project from when I was posting updates on reddit (before realizing that, you know, I should use my blog and not blog on forums), this is one revision beyond the last one posted. Changes:

  • Improve MCU decoupling
    • Some decoupling caps had been elbowed out of the way from their rightful place adjacent to the MCU Vcc/Gnd pins at the layout stage. The other stuff in the way was moved, and the caps relocated to the proper position, removing some trace length and loop area.
    • Whack one more decoupling cap (C9) on there for good measure, for a total of 4.
  • Add polarity markings to the C8 footprint. This is the tantalum 5V rail bulk capacitor.
  • Increase the number of vias in the low-side DC bus going up to the sources of mosfets up top, and revise the via pattern a bit.
  • Put soldermask cutouts over those same via farms on both sides. The vias can optionally be solder-filled for better thermal and electrical performance.
  • Run a soldermask cutout down the outside edge of the low-side DC bus. This can be solder tinned or even have a wire or bar soldered on to decrease bus resistance further. Low side is where any voltage drop really counts and is a risk.
  • Move some traces to avoid risky low clearance to the board edge.
It's pretty much ready to go, next update will likely be boards in hand, hopefully soon.

So, this is a good opportunity to demystify discrete gate drive. Time for a schematic excerpt:

AH, BH and CH are (active-low) gate signal inputs from the MCU, PHASE_A through _C are obvious, GATE_AH through _CH are obvious, and DC_BUS is obvious.

This is the very same high-side gate drive arrangement used in every good old lower-voltage hobby ESC way back to the dawn of time and, more or less, right up until dronestuff started using Silabs EFM8 and ARM MCUs. The only difference is that generally R11, R12, R13 is 2.2K - the 1.6K (1.54K as shown) is my optimization, taking into account the power ratings of resistors.

This is a bootstrap circuit similar to what most halfbridge or three-phase gate driver ICs also use to do the same thing i.e. generate the offset voltage above the DC bus to turn on high-side N-channel mosfets or IGBTs. The capacitors (C4...6) are charged through the bootstrap diodes (D1...3) whenever the phase node is low. The voltage across this cap, referenced to the phase node, stays in the same place relative to the phase node when the phase node later swings up, and it is this voltage that lifts the gate above the source (phase node) on the high side device. Of course most gate driver ICs use a totem pole/push-pull power stage (similar to a tiny inverter phase leg, or an AVR pin driver) to more stiffly drive the gate in either direction instead of the single NPN transistor and ballast resistor, but it is largely similar.

How this works is that in the off-state (xH logic HIGH, and thus driving the transistor at saturation through the base resistor), the node with the mosfet gate on the left end of the ballast resistor (R11...13) is pulled down to very near ground (actually the Vcesat of the transistor), keeping the gate locked off. The bootstrap capacitor C4...6 is charged through the bootstrap diode D1...3; it is the ballast resistor that drops the full DC bus voltage minus the diode drop and Vcesat, establishing that voltage across the cap. To turn the gate on, the input is driven low or removed, the transistor turns off, and suddenly you have: phase node (high-side source), charged cap, ballast resistor, gate resistor, and gate. The voltage on that node at the left end of the ballast resistor - and the gate - now flies up to whatever is on the cap and feeds the gate.

The big obvious shortcoming of this simple circuit surrounds the ballast resistor. It is in the current path for charging the gate, so for high drive current and fast switching, you want to minimize it - but the lower it is, the more idle current it draws off the DC bus and the more power it must be rated to dissipate. Usually this ends up with fairly weak drive strength, which is why this circuit is not the greatest idea outside of relatively low gate charge mosfets, say <90nC.
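
The tradeoff in rough numbers - a 3S bus and a hypothetical 30nC gate are assumed here, and treating the drive current as constant at its peak gives only a crude lower bound on switching time:

```python
# The ballast resistor tradeoff, quantified with assumed values: lower R
# means faster gate charge but more idle burn whenever the gate is held off.
V_BUS = 12.6   # 3S, assumed

for r_ballast in (2.2e3, 1.6e3):
    p_idle = V_BUS**2 / r_ballast   # dissipated whenever the transistor is on
    i_peak = V_BUS / r_ballast      # best-case gate charge current
    t_charge = 30e-9 / i_peak       # crude lower bound to move 30 nC
    print(f"{r_ballast / 1e3:.1f}k: idle ~{p_idle * 1e3:.0f} mW, "
          f"drive ~{i_peak * 1e3:.1f} mA, >= ~{t_charge * 1e6:.1f} us per 30 nC")
```

Microsecond-scale turn-on at single-digit milliamps is the point: a proper gate driver IC sources an amp or more and does the same job in tens of nanoseconds, which is exactly the gap being discussed.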

Other, possibly major, shortcomings:

* There is no explicit undervoltage lockout for the voltage on the caps to prevent operation with critically low gate drive level. "Desaturation" events where a switch is partially-on and thus dropping a ton of power and violating its SOA due to insufficient gate drive are a major cause of inverters going bang, whatever the cause of the low voltage.

* The usable DC bus voltage range is limited, because gates are driven with up to the raw DC bus voltage minus a tad. This is why putting 6S on one of these makes me very nervous.

* There is no hardware-enforced minimum dead time or shoot-through prevention.

Gate driver ICs offer these features and rectify these failure modes/risks, along with generally achieving better switching times and/or driving large gate charges with more drive current. But discrete drive boards are not often trouble sources if operated correctly. I have never had one blow up due to motor abuse or any apparent gate drive fault. I have only ever popped one board, and that was by putting straight 14.8V into an MCU.

The low-side driver in these arrangements is just the AVR pin driver (with a pulldown resistor for safety). The AVR pin driver is pretty beefy - it can source/sink ~40mA, and it's 5V of course. But this does identify a factor in the prevalence of gate driver ICs in modern drone motor controllers: those all happen to use 3.3V chips that don't have the voltage to drive even logic-level mosfets competently, and often don't have the current either.

Closed-loop adjustable speed drive for flywheel blasters; new digital signal protocol and SimonK variant.

SimonK code for the impatient

Closed-loop speed control of flywheel drives is, if you ask me, an indispensable and critical feature and a huge step above BLDC operation. I would never want to design or play without it. However, the usual sort of drive that offers this in a form that actually performs well for flywheels is SimonK with its stock safety governor reset to a desired speed. The main shortcoming with that is that the governor was never originally meant to be used as a service control and had to be set at compile time.

In the old days, this would be more than fine - having a blaster that you can reflash to change velocity certainly beats a blaster where you have to change hard parts to change velocity, and hell, much of the world is still living in that era with DC motors and Stryfoid hosts. Also, a specific reason to not implement adjustable speed is that drives with hardcoded speed limits tend to make game organizers happy by thwarting cheating and preventing blaster manager firmware shenanigans and battery shenanigans from being able to affect safety.

However, there is a lot of interest in convenient user-adjustable speed, and in the system programmer being able to control speed from the blaster manager's end along with everything else. This has kept a lot of brushless adopters using open-loop voltage command instead of closed-loop speed command in their work. That is unfortunate.

Fixing this was the subject of several experiments of mine:
  • A SimonK variant which straightforwardly used the stock throttle signal input code to operate the governor.
  • A venture into using external control loops on the blaster manager.
  • A SimonK variant which uses a digital signal protocol to configure the governor.
The last is obviously the success.

Why using analog PWM throttle signal is not the answer

Motor drives in hobby space have long accepted the "1-2ms" analog PWM protocol, which encodes a command signal as a pulsewidth, nominally between 1000us and 2000us for idle and floored respectively (or perhaps bidirectional, centered on 1500us, for reversible drives), with a nominal carrier frequency of 50Hz. This set of timing parameters arises from how the controllers of early analog RC servos worked and how early RC receivers multiplexed radio commands to multiple servos, prior to the use of digital radio gear and high-power motor drives in those hobbies (i.e. when they were dominated by combustion engines and crude mechanical motor controls, probably before you were born).

Since then, those protocols have received a few updates. The first is that the carrier frequency is allowed to be much higher than the painfully slow 50Hz. SimonK, for instance, has a quite general pulse capture handler and allows any carrier frequency into which the configured maximum pulsewidth will fit - so with a full throttle pulse length of a little under 2ms, a usual nominal value is 490Hz for the maximum update rate of the 2ms protocol. Following on from that, several standardized-ish "protocols" have been defined that use shorter ranges of pulsewidths. These are often called OneShotnnn. By using shorter pulses, they allow higher carrier frequencies and accordingly allow a drive to have more control bandwidth - which is why multicopters, with offboard control loops steering the drives and ever-improving computational performance, have been the main field pushing faster throttle protocols. SimonK has preconfigured support for OneShot125, the most common, using a 125-250us pulsewidth range.
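
The bandwidth math is simple enough to do in two lines (ignoring the small inter-pulse gap, which is where the customary 490Hz-instead-of-500Hz figure comes from):

```python
# Carrier frequency ceiling for pulsewidth protocols: you can't send pulses
# faster than back-to-back maximum-length ones (small gaps ignored here).
protocols = {
    "1-2ms PWM":  2000e-6,   # full-throttle pulse length, seconds
    "OneShot125":  250e-6,
}
for name, t_max in protocols.items():
    print(f"{name}: max update rate ~{1 / t_max:.0f} Hz")
```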

Most recently, the drone world has ditched them completely in favor of digital protocols. The standard here is called "DShot". You can go read all about it elsewhere - but the matter of going digital is not (just) about faster signals and more throttle bandwidth, necessarily. Rather, it is a matter of removing error inherent in using a relatively short pulsewidth to encode a value that might be expected to have 10-bit or higher resolution. The oscillators on our sending and receiving MCUs aren't perfect and so these protocols are always nondeterministic and plagued with drift and jitter and other artifacts.
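
To put a rough number on that inherent error - 10-bit resolution and a 1% oscillator mismatch are assumed figures here, but they are plausible for cheap resonator-clocked MCUs:

```python
# Why analog pulsewidths can't carry a high-resolution value cleanly:
# spread ~1024 command steps over the 1000 us range and each step is under
# a microsecond wide, while sender/receiver oscillator mismatch scales the
# entire pulse. The 1% mismatch figure is assumed for illustration.
RANGE_US = 1000.0            # the 1000-2000 us command span
STEPS = 1024                 # 10-bit command resolution
step_us = RANGE_US / STEPS   # ~0.98 us per step

drift = 0.01                 # 1% oscillator mismatch
error_us = 2000.0 * drift    # a full-length pulse measures ~20 us off
print(error_us / step_us)    # ~20 command steps of error - before any jitter
```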

Another wrench thrown into the matter is that often, a drive itself - powered by a simple and cheap MCU - has limited resources while managing a motor at high speed, and there is inevitably a 1/x delinearization that needs to take place somewhere between a speed command (fundamental motor frequency) and a period (1/f), which is what motor-aligned code actually works with. This is, in short, why that old analog variable-speed mod was junked - the AVR MCUs it runs on make the delinearization math too computationally expensive to do on the drive's end while possibly spinning a motor, and doing the delin on the other end results in the analog signal's error margin having an exponential impact on the speed command. That is a dead end.

Overall though, regardless of workarounds that could be used for that, what we want if we are going to configure speed setpoints over the wire in blasters is definitely not an analog pulsewidth signal. Too imprecise, too much trouble, we know better now.

What about offboard control loops for flywheel speeds?

This is another direction that comes up.

In short, it is not a given that a motor drive itself should contain the main control loop for an entire mechatronic system - in many fields, it isn't even expected to.

Look at multicopter drones - they use a flight controller, which is in the same role as the blaster manager in a software-defined AC-driven blaster system, to take operator and sensor inputs, do all vehicle dynamics computations and output the resulting throttle values to multiple motor drives.

Look at industrial automation - a servo drive for a robot or CNC axis or automated assembly line gizmo is probably taking a torque (current) command. The dynamics math for what is being moved is then done by "something else".

We do have access to speed signals from the drive, so why not just bang up a PI or PID loop for each wheel and call it good?

Well, I tried that. It works (of course). But like most PID controllers for various processes, heating and motion systems that you may know, it requires a lot of tuning to perform acceptably. The control loop parameters end up being specific to a certain drive system and load. Set them wrong and you're going to have your drive overshoot, oscillate, or have shitty transient response. What worked on one rotating assembly would be completely clobbered by a different motor or wheel.

A lot of this comes from latencies and bottlenecks in the offboard control loop approach, including the throttle signal protocol as well as the tach signal being only 1 pulse per electrical rotation.

Then, resource contention again becomes a challenge; this time on the blaster manager, which is itself a motor drive for the feed system in my case and is now tasked with keeping track of a 100-pole motor turning at 1000rpm while running multiple control loops. Bit of a headache, and a bigger hammer (such as moving to a fancy, expensive MCU) is not an ideal answer.

SimonK's magic governor

So what is the deal with the SimonK safety governor? What sort of controller IS it, and why does it work so well?

It is a brutal approach if nothing else - on every timing update, if the setpoint speed is exceeded, the voltage command limit is sliced in half with an LSR/ROR operation. The voltage command limit then "more slowly" ramps back up toward maximum (filtered by the throttle setting) via the same logic which controls the voltage apply rate from idle. It isn't bang-bang or PWM because it isn't 1 bit, but it nevertheless dithers between cruder steps to create an intermediate value - at steady state, these two bits of code are swatting the voltage command back and forth about whatever equilibrium it actually ought to reach, rather than it ever settling precisely there. It takes effectively no math to implement, so it is fast and lightweight, and importantly, it is directly part of the basic inverter control itself and doesn't have any latencies that aren't also latencies in controlling the inverter/motor. There is a better control-theory-based way to explain this abstractly, but: it doesn't overshoot - and doesn't need damping to not overshoot - and doesn't go haywire from having an implicitly rather high P gain, and thus doesn't need any integral effect to compensate for a lack of P gain, because it is exactly as fast as the commanded system.
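
The real thing is a few instructions of AVR assembly; here's a toy Python model of just the two rules described above (halve on overspeed, ramp back otherwise) against a crude first-order motor. Every constant is invented for illustration - this is not SimonK code:

```python
# Toy model of the governor's halve-and-ramp behavior. The motor model and
# all constants here are invented for illustration - this is not SimonK.
def simulate(setpoint_rpm, steps=4000, ramp=1, limit_max=1024):
    limit, rpm = limit_max, 0.0
    k, lag = 30.0, 0.001   # rpm per command count; per-update motor lag
    for _ in range(steps):
        if rpm > setpoint_rpm:
            limit >>= 1                 # the LSR: slice the limit in half
        else:
            limit = min(limit + ramp, limit_max)
        rpm += lag * (k * limit - rpm)  # first-order lag toward k * limit
    return rpm

# Settles into a dither band a few percent under the setpoint, with no
# sustained overshoot - any excursion above it gets halved away within
# one update.
print(simulate(15000.0))
```

Note how the "loop gain" is implicit in the update rate relative to the motor's response - which is the point being made above about it being exactly as fast as the commanded system.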

It's a sort of case you would wish for that can make a simple control loop work well. It's like some idealized heater control situation where the thermistor can be directly at the hottest part of the heater, has no thermal mass of its own, the thermal conductivity to the functional parts is infinite, and there is no lag, and the thermistor is monitored and control calculated on every single PWM cycle of the power stage. You can just use a P control with a gain of one metric shitload and be fine. That's why it's magic.

Put the Simon governor on a flywheel assembly, ANY flywheel assembly, any motor, any inertia, any drag torque/windage load, and apply any operating load, and you get pretty damn crisp speed control. Not absolutely perfect, mind you (you might get a tiny startup overshoot at certain speeds for instance), but hell, it's WAY more than good enough for government work. You can hear how well it works every time players slow down video of T19s firing.

...working name: FlyShot

So the way forward is laid before us. Now all we need is a digital signal protocol.

Ideally, we would want something quite fast in data rate, and ideally, we would also want error detection, like DShot has - but it's not like the governor setting needs to be programmed with any great speed; it is, or can be, a configuration operation.

Do we USE DShot itself? Well, canonical DShot doesn't carry a big enough payload for full-resolution governor updates in one frame. Also, DShot's typical timing parameters are too fast, and thus WAY too noise-vulnerable, for my taste.

The protocol should try to be backwards-compatible, and the motor enable signal should be simple to generate on all platforms with a basic PWM peripheral - which means that it is NOT itself a digital packet that needs to be sent constantly just to spin motors and keep them on, but a single pulse or a constant logic level or something of that nature.

What I came up with as a quick proof of concept, but have found quite robust, is a 4-level protocol that encodes digital packets using a train of consecutive sub-750us pulses (nominal T0H of 100us, T1H of 400us, and TL of at least 500us between). The 1000-2000us "normal PWM" pulse range is treated as a binary enable/disable command, but it could also be an analog voltage command if anyone wanted and the throttle code were stuck back in place. Packets are 16-bit, sent MSB first, consisting of a mandatory leading 1 and a 15-bit governor value, preceded and terminated by throttle-range pulses. Bits sent beyond 16 left-shift all previous data off the end of the 16-bit buffer. Data received between throttle pulses without leaving a 1 in the MSB of the 16-bit buffer is ignored and the buffer cleared. As a redundancy, updates should always be applied repeatedly, as with DShot commands, and throttle pulses should be continually sent at all times while not transmitting packets, which temporally precludes receiving any phantom packets. But really, I should have a CRC like DShot does on future iterations. That will be mandatory if the data rate is increased, to avoid noise problems.
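
To make the framing concrete, here's a hypothetical encoder sketch in Python emitting (high_us, low_us) pulse pairs per the nominal timings above. This is illustration only, not the shipping transmitter code:

```python
# Hypothetical sketch of the packet framing described above - for
# illustration only, not the actual FlyShot transmitter implementation.
T0H, T1H, TL = 100, 400, 500   # nominal bit timings, microseconds
THROTTLE = 1500                # any 1000-2000us pulse acts as enable/framing

def flyshot_packet(governor):
    """Frame a 15-bit governor value as (high_us, low_us) pulse pairs:
    throttle pulse, then leading 1 + 15 data bits MSB first, then throttle."""
    assert 0 <= governor < (1 << 15)
    word = (1 << 15) | governor          # mandatory leading 1 -> 16 bits
    pulses = [(THROTTLE, TL)]            # framing pulse before the packet
    for i in range(15, -1, -1):
        bit = (word >> i) & 1
        pulses.append((T1H if bit else T0H, TL))
    pulses.append((THROTTLE, TL))        # terminating framing pulse
    return pulses

pkt = flyshot_packet(0x2AAA)
print(len(pkt))   # 18 pairs: 16 data bits bracketed by 2 throttle pulses
```

Sending the same packet a few times back to back, with throttle pulses in between, is the redundancy scheme described above.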

Here's the SimonK fork with this implemented. Remember to change your TIMING_MAX safety governor to something higher for higher speed setups like Ultracages and Hurricanes.

An example FlyShot transmitter is in this very much alpha S-Core firmware. There is plenty of commented-out debug code and trial features in there, a few bugs (I don't think the tournament lock actually works when booted without the trigger down), plenty of non-final UI behaviors, and the ROF input isn't linear because I didn't bother yet. Also, there is some hackiness going on with how that governor interrupt is turned off, which is all either fixed or getting fixed right now - that was me trying to get variable speed and closed-loop STC (etc.) on the field right before a war.

The signal protocol works very solidly in general though. There are currently two adjustable speed T19s out there running it.

I'll have real example code and timing diagrams and whatnot later. I've got code to write, boards to design and darts to burn at the moment.

Tuesday, February 4, 2020

ACE LC ("low cost") 1.0 - Open Hardware SimonK ESC

Maybe I should get back into using this blog for blogging, huh? It's been a while, and as of the last year (ish) I have been digging into longstanding motor control and blaster management problems and designing PCBs. And of course first on my list of hobby problems that really, really need shooting down in flames: the infernal Great SimonK Drought of 2018-Present, a time of great sorrow for all flywheelers.

Hence, the "T-Verter Project", and what is about to become a line of assorted motor drives under the name ACE. This is the first. It was about 8 months back that I designed this. Not my first inverter, but it is my first combat-ready practical one.

Here's some populated ones.

Feature rundown:

  • ATmega8 MCU @ 16MHz with ceramic resonator (of course)
  • Discrete gate drive similar to Afro and Spider boards and scores of good old ESCs, with some refinements to component specs to improve drive strength (etc.)
  • International Rectifier IRLR8743 mosfets
  • ST L5150BN LDO linear regulator (SOT-223) + 47uF tantalum bulk capacitance
  • 18k/3.3k/18k feedback network with 0.1% tolerance resistors
  • Minimum passive size: 0805
  • Minimum semiconductor size: SOT-23
  • All ISP pads at edge of board with other user signals!
  • 23 x 51.5mm - Narrower, but slightly longer, than a ZTW Spider 30.
  • 2 layer board!
  • Hand solderable!
  • Minimum trace/clearance widths of 10/10 mils (2oz copper required)
  • bs_nfet board target/pinout (same as ZTW Spider)
  • Multiple DC link capacitor options for fitment/preference. The 2 sets of cap pads are actually SMD 8mmOD electrolytic footprints, which enables a ready-to-run package of 23 x 51.5mm x ~half an inch thickness. Or, a regular big-ass lytic or two smaller ones can be hung off the end of the board.
  • Multiple wire routing options. Phase wire pads are down one side. Phase wires can exit basically anywhere you like.
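As a quick sanity check on that 0.1% feedback network spec, here's a sketch of the voltage-sense scaling. The topology is my assumption (two 18k resistors in series on the high side, 3.3k on the low side) - verify against the actual schematic before trusting the numbers:

```python
# Voltage-sense divider sketch for the ACE LC's 18k/3.3k/18k feedback network.
# Topology (2x 18k high side, 3.3k low side) is an ASSUMPTION, not from the schematic.

def divider_ratio(r_top_ohms, r_bottom_ohms):
    """Output/input ratio of a resistive voltage divider."""
    return r_bottom_ohms / (r_top_ohms + r_bottom_ohms)

R_TOP = 18e3 + 18e3      # two 18k resistors in series (assumed)
R_BOTTOM = 3.3e3

nominal = divider_ratio(R_TOP, R_BOTTOM)
print(f"nominal ratio: {nominal:.4f}")

# The point of the 0.1% parts: worst-case tolerance stacking barely moves
# the ratio, so the sensed battery voltage stays calibrated unit to unit.
worst = divider_ratio(R_TOP * 1.001, R_BOTTOM * 0.999)
error_pct = abs(worst - nominal) / nominal * 100
print(f"worst-case ratio error: {error_pct:.2f}%")
```

With 0.1% resistors the worst-case ratio error stays around 0.2%, versus roughly 2% with garden-variety 1% parts - which is the whole argument for specifying them on the sense path.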
Here's one set up like an Afro or regular old hobby ESC comes out of the pack - bus and signals in one end, phases out the other, cap off the end:

Note the presence of sufficient low-ESR DC link capacitance. That's a 1000uF Kemet ESY, and I recommend no less on this scale of board! Drone vendors cut corners - don't follow them. Ripple on the DC bus is bad.
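To put a rough number on why ESR and capacitance both matter here, below is a back-of-envelope sketch of peak-to-peak bus ripple under a PWM-chopped motor load. Every figure in it (current step, ESR values, PWM frequency) is an illustrative assumption, not a measurement of this board:

```python
# Back-of-envelope DC-link ripple estimate. All numbers are illustrative
# assumptions, not measurements of the ACE LC.

def bus_ripple_vpp(i_step_a, esr_ohms, cap_f, pwm_hz, duty):
    """Peak-to-peak bus ripple: instantaneous ESR step plus capacitor
    droop while the cap sources the load current for one PWM on-time."""
    t_on = duty / pwm_hz
    dv_esr = i_step_a * esr_ohms
    dv_cap = i_step_a * t_on / cap_f
    return dv_esr + dv_cap

# 1000uF / 25 mOhm (roughly low-ESR lytic class, e.g. an ESY-type part)
# vs. a small 220uF / 150 mOhm SMD lytic; 20 A step, 16 kHz PWM, 50% duty:
good = bus_ripple_vpp(20, 0.025, 1000e-6, 16e3, 0.5)
poor = bus_ripple_vpp(20, 0.150, 220e-6, 16e3, 0.5)
print(f"1000uF low-ESR: {good:.2f} Vpp   small SMD lytic: {poor:.2f} Vpp")
```

Under these assumed numbers the skimpy cap sees several volts of bus ripple against roughly one volt for the big low-ESR part - ripple that ends up stressing the fets and the cap itself, which is why corner-cutting here is a bad idea.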

Here's the Google Drive directory in which you will find the gerbers for the board, my component notes, and a Digikey BOM. I have also made my EasyEDA project public, though be warned: EasyEDA has had several major upgrades and bug fixes since I laid this down, and sometimes something messes up in an old project and needs to be fixified a bit.

I'll be honest, I am not super happy with the LC 1.0. It's a bit janky, a bit rough, a bit old school. My layout/wire routing approach was kinda ill-conceived, too nonspecific, and lacks polish, and there are some minor buswork current bottlenecks. My ZTW Spider-inspired choice of DPAK for the switching device package was distinctly outdated; the available DPAK mosfets are a bit dusty and creaky and a little frail compared to the wonders of modern silicon, and there's probably more phase-node inductance than optimal. The SMD electrolytic caps I wound up disliking and realizing were a rather daft idea in the first place: they are tricky to solder, and their ESR is uncompetitive and really only good for smaller, lower-current situations. The final nag is that the resonator footprint is a bit tight on the pad sizes, which makes it tricky to solder down.

But you know what, as a Spider/Afro-replacement ESC, it works great. It runs cool enough, eating up locked-rotor abuse testing and repeated 0-100% stomps with 67mm/70mm wheels and Emax RS2205Ses (pretty aggro little motors, they are) without getting more than lukewarm. The logic power is solid and very good about not brownout-resetting the MCU, even when I have accidentally flattened a battery with some of them. It is excellent about startups and holds sync like a tick! I have a pair in combat service, I sold a T19 to a local with a pair in it, and at least one redditor has built and extensively run a pair - none of them have caused any trouble.

There's much more to come very soon, though (as you may have seen on reddit).

You can run any SimonK variant that you would on any other bs_nfet board. I recommend starting with the T19 tune and changing governor (and perhaps V/Hz current control) settings as/if required. For the T19.100 series etc., you can use the published binaries. Remember to use the correct board target: make bs_nfet.hex in your source tree. Pay attention to the recommended fuse settings right there in the SimonK source, and use avrdude to burn the fuses that first time - KKMulticopterFlashTool will often set you to the wrong clock speed.
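For reference, a sketch of that workflow, assuming a usbasp programmer and the upstream SimonK (tgy) source tree. The fuse bytes are deliberately left as placeholders - copy the recommended values out of the SimonK source rather than trusting anything here or a flash tool's defaults:

```shell
# Assumptions: usbasp programmer, ATmega8 target (-p m8), upstream SimonK tree.
git clone https://github.com/sim-/tgy.git
cd tgy
make bs_nfet.hex                  # build the bs_nfet board target

# One-time fuse burn. $LFUSE / $HFUSE are PLACEHOLDERS -- substitute the
# recommended values from the SimonK source (they select the external
# 16MHz resonator as the clock source):
avrdude -c usbasp -p m8 -U lfuse:w:"$LFUSE":m -U hfuse:w:"$HFUSE":m

# Flash the firmware:
avrdude -c usbasp -p m8 -U flash:w:bs_nfet.hex:i
```

Burning the fuses yourself with avrdude is the step that matters: until the fuses point at the external resonator, the MCU runs off the internal RC oscillator at the wrong speed and all the PWM and serial timing is off.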