> It supports the conversion of PyTorch, TF/TF Lite, ONNX
I think it doesn't support TF Lite (only TF SavedModels), and ONNX hasn't been supported for quite a while now, sadly.
As for the repo, I like it. I actually had to convert a few models yesterday and today, so this would be useful. I see you use Swift instead of coremltools, which is great: benchmarking should have less overhead.
Some ideas:
1) Would love to have this as an agent skill as well
2) It would be good if we could parse the Xcode performance report file and print it in a human-readable format (to pass to an AI). Gemini Pro struggled for a while to figure out the JSON format.
The JSON output makes it easy to wrap as a tool for frameworks like LangGraph, but I would be worried about the latency. Since it is a CLI, you are likely reloading the whole model for every invocation. That overhead is significant compared to a persistent service where the model stays loaded in memory.
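To make the point concrete, here's a minimal sketch of wrapping a JSON-emitting CLI as a callable (which is roughly what a LangGraph tool adapter would do under the hood). The demo command is a stand-in, not the real converter binary; any CLI that prints JSON on stdout works. Note that every call pays the full process-startup and model-load cost:

```python
import json
import subprocess
import sys
import time

def run_json_cli(cmd: list[str]) -> dict:
    """Invoke a CLI that emits JSON on stdout and parse the result.

    Each invocation spawns a fresh process, so a model-conversion or
    benchmarking tool would reload the model from scratch every time --
    the latency concern above.
    """
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

# Stand-in for the real CLI: any command printing JSON to stdout.
demo = [sys.executable, "-c", "import json; print(json.dumps({'latency_ms': 12}))"]

start = time.perf_counter()
result = run_json_cli(demo)
elapsed = time.perf_counter() - start
print(result, f"(call took {elapsed * 1000:.0f} ms, mostly process startup)")
```

A persistent service (or a long-running REPL/daemon mode for the CLI) would amortize that startup cost by keeping the model loaded across calls.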