AlmightySnoo, (edited)

I think you may be referring to the Android NN API, which should use the backend targeting Google Tensor SoCs when one is available. From this commit it seems to be available on GrapheneOS too: github.com/…/b60bfdd87550bf20f6cb73234a1a8ed2ecd6… (the EdgeTPU is the ASIC that differentiates Tensor SoCs from the rest, enabling fast, low-power inference at the hardware level).
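For anyone curious what that looks like from app code, here’s a minimal sketch using TensorFlow Lite’s NNAPI delegate. The model file and tensor shapes are placeholders, and pinning a specific accelerator name is only a guess; on Tensor devices the NNAPI driver can route supported ops to the EdgeTPU on its own:

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.io.File
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Run a TFLite model through the NNAPI delegate. On Tensor devices the
// NNAPI driver can dispatch supported ops to the EdgeTPU; elsewhere it
// falls back to GPU/DSP/CPU backends.
fun runOnNnapi(modelFile: File) {
    val nnApi = NnApiDelegate(
        NnApiDelegate.Options().apply {
            // Pinning an accelerator is possible, but the name is
            // driver-specific ("google-edgetpu" is only a guess);
            // leaving it unset lets NNAPI pick the best backend.
            // setAcceleratorName("google-edgetpu")
        }
    )
    val interpreter = Interpreter(modelFile, Interpreter.Options().addDelegate(nnApi))
    try {
        // Placeholder I/O for a model with one float32 [1, 4] input and
        // one float32 [1, 4] output; match these to the real model.
        val input = ByteBuffer.allocateDirect(4 * 4).order(ByteOrder.nativeOrder())
        repeat(4) { input.putFloat(0f) }
        input.rewind()
        val output = ByteBuffer.allocateDirect(4 * 4).order(ByteOrder.nativeOrder())
        interpreter.run(input, output)
    } finally {
        interpreter.close()
        nnApi.close()
    }
}
```

Leaving the accelerator unpinned is the safer default anyway, since NNAPI quietly falls back to CPU for ops the accelerator doesn’t support.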

Oisteink,

Is this the Google Tensor SoC? I’ve seen no public API for using it.

Edit: would this be the type of API you’re referring to?

j4k3,

The second link is closer. I think it is technically the EdgeTPU that handles the ML workloads.

It would take a higher level of accessibility for me to engage with this in practice. I’d need Hugging Face-style high-level tooling to have a chance of getting it working, and I’m curious whether anything like that exists. The available RAM probably limits anything really useful. It might be interesting to see what kind of edge processing could be combined with an offline model running on a local server. I can already connect to models over LAN, but my largest models are slow.
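On the LAN side, this is roughly all it takes to query a locally hosted model from Kotlin. The address, port, /completion endpoint, and JSON fields here are assumptions based on llama.cpp’s example server, so adjust for whatever you actually run:

```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Sketch of querying a model served on the LAN. Assumes a llama.cpp
// example server at 192.168.1.50:8080 (hypothetical address); its
// /completion endpoint takes a JSON body with "prompt" and "n_predict".
fun askLocalModel(prompt: String): String {
    val conn = URL("http://192.168.1.50:8080/completion")
        .openConnection() as HttpURLConnection
    conn.requestMethod = "POST"
    conn.doOutput = true
    conn.setRequestProperty("Content-Type", "application/json")

    // Naive JSON string escaping, enough for a quick test.
    val escaped = prompt.replace("\\", "\\\\").replace("\"", "\\\"")
    conn.outputStream.use { out ->
        out.write("""{"prompt": "$escaped", "n_predict": 128}""".toByteArray())
    }
    return conn.inputStream.bufferedReader().use { it.readText() }
}
```

Anything that mixed on-device edge processing with a server like this would just be orchestrating calls on both sides of that HTTP boundary.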
