
Integrating Your Own Models

There are several files you need to amend for the Attribution Engine to work with your model. A more comprehensive guide is available in Confluence.

Attribution API

There are two files to amend for attribution_api.

model.py

You need to implement a class with three methods: preprocess, predict, and explain. Each method should call the relevant endpoints of your model service where appropriate.

class Model(BaseMLModelStatic):
    def __init__(self, sample_id: str, sample_name: str, sample: bytes, content_type: str):
        super().__init__(sample_id, sample)
        self.sample_name = sample_name
        self.content_type = content_type

    def preprocess(self):
        pass

    def predict(self):
        pass

    def explain(self):
        pass
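To make the shape of these three methods concrete, here is a minimal, hypothetical sketch. The base class is stubbed in so the example is self-contained, and the feature names, labels, and scores are illustrative only; in the real service, predict and explain would call your model's endpoints over HTTP.

```python
class BaseMLModelStatic:
    # Stand-in for the real base class, stubbed for this sketch only.
    def __init__(self, sample_id: str, sample: bytes):
        self.sample_id = sample_id
        self.sample = sample


class ExampleModel(BaseMLModelStatic):
    def __init__(self, sample_id: str, sample_name: str, sample: bytes, content_type: str):
        super().__init__(sample_id, sample)
        self.sample_name = sample_name
        self.content_type = content_type
        self.features = None

    def preprocess(self):
        # Turn the raw sample bytes into model features (illustrative).
        self.features = {"size": len(self.sample), "name": self.sample_name}
        return self.features

    def predict(self):
        if self.features is None:
            self.preprocess()
        # The real implementation would POST self.features to the model's
        # prediction endpoint; the result here is a hard-coded placeholder.
        return {"sample_id": self.sample_id, "label": "benign", "score": 0.12}

    def explain(self):
        # The real implementation would call the model's explanation
        # endpoint; stubbed here with a placeholder result.
        return {"sample_id": self.sample_id, "top_features": ["size"]}
```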

uploads.py

After implementing your own model class, instantiate it and call it in the /sample endpoint.

@upload_router.post("/sample")
async def submit_sample(sample: UploadFile) -> SubmitSampleReturnValue:
    ...
    # Add more models here
    # FESTIVE
    festive = FESTIVEModel(sample_id, sample_name, contents, content_type)
    festive_results = festive.predict()
    # Your model
    ...
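The pattern inside submit_sample is the same for every model: construct it with the sample details, then call predict. The helper and dummy class below are purely illustrative, not part of the real endpoint, but show how additional models slot in.

```python
def run_models(model_classes, sample_id, sample_name, contents, content_type):
    # Instantiate each model class with the same sample details and
    # collect its predict() result, keyed by class name (illustrative).
    results = {}
    for model_cls in model_classes:
        model = model_cls(sample_id, sample_name, contents, content_type)
        results[model_cls.__name__] = model.predict()
    return results


class DummyModel:
    # Hypothetical model class following the constructor shape used above.
    def __init__(self, sample_id, sample_name, contents, content_type):
        self.sample_id = sample_id

    def predict(self):
        return {"sample_id": self.sample_id, "score": 0.5}
```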

Crystal Ball

There are two files to amend for crystal_ball. (This is not the same crystal-ball as before; it kept the name during development because it serves a similar purpose.)

queue_tasks.py

Add a new endpoint for your model, replacing model_name with your model's name.

@queue_router.post("/model_name/{sha256_hash}")
async def queue_model_name_task(...) -> dict[str, str]:
    ...
    task = get_prediction_model_name.delay()
    return {"id": str(task.id)}
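The endpoint's job is just to enqueue the Celery task and hand the task id back to the caller. The sketch below stubs Celery's .delay() with a stdlib stand-in so the return shape is visible in isolation; the names are illustrative, not the real code.

```python
import uuid


class FakeAsyncResult:
    # Minimal stand-in for the AsyncResult that .delay() returns.
    def __init__(self):
        self.id = uuid.uuid4()


def enqueue_prediction(sha256_hash: str) -> FakeAsyncResult:
    # Stands in for get_prediction_model_name.delay(sha256_hash).
    return FakeAsyncResult()


def queue_model_task(sha256_hash: str) -> dict[str, str]:
    # Same shape as the endpoint body: enqueue, then return the task id.
    task = enqueue_prediction(sha256_hash)
    return {"id": str(task.id)}
```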

celery_worker.py

Implement the get_prediction_model_name function called above. This function interacts with your model's endpoints.

@celery_service.task(bind=True, queue="celery")
def get_prediction_model_name(...):
    pass
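As a rough sketch of what the task body might do: fetch the sample by its hash, call the model's prediction endpoint, and return the result. Both helpers here are hypothetical stubs; the real task would make HTTP calls to your model service instead.

```python
def fetch_sample(sha256_hash: str) -> bytes:
    # Hypothetical helper: would retrieve the stored sample by hash.
    return b"sample-bytes-for-" + sha256_hash.encode()


def call_predict_endpoint(sample: bytes) -> dict:
    # Hypothetical helper: would POST the sample to the model's
    # predict endpoint; returns a placeholder result here.
    return {"label": "benign", "score": 0.05}


def get_prediction_example(sha256_hash: str) -> dict:
    # Shape of the Celery task body: fetch, predict, return.
    sample = fetch_sample(sha256_hash)
    result = call_predict_endpoint(sample)
    return {"sha256": sha256_hash, **result}
```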