πŸ“³ Model Execution & Token Metering

Compute is metered, and usage is monetized.

In ArtisanAI, AI model inference is not a passive backend operation; it is a token-incentivized, programmable economic activity. The protocol treats each model as a first-class on-chain actor, with usage governed by deterministic rules, resource tracking, and monetization logic.

Model Execution Workflow

When a user initiates a content generation request (e.g., by submitting a prompt), the following sequence occurs:

1. Model Selection: The request is routed to either a public protocol-curated model or a user-deployed custom model.

2. Inference Execution: The selected model processes the input and generates output on distributed or edge compute.

3. Usage Logging: The model’s CID, user wallet, execution parameters, and timestamp are logged on-chain.

4. Metering & Cost Calculation: The execution is priced in $ART tokens based on request complexity and compute consumption.

5. Settlement & Distribution: $ART tokens are programmatically distributed to the model creator, infrastructure operator, and protocol treasury.
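The five-step lifecycle can be sketched as a single pipeline. This is a minimal illustration, not the protocol's implementation: the function names, the word-count stand-in for compute consumed, the `price_per_unit` default, and the 80/15/5 revenue split are all assumptions made here for clarity.

```python
from dataclasses import dataclass, field
import time

# Illustrative revenue split -- NOT a documented protocol constant.
SPLIT = {"creator": 0.80, "operator": 0.15, "treasury": 0.05}

@dataclass
class ExecutionRecord:
    """On-chain usage log entry for one inference call (sketch)."""
    model_cid: str
    user_wallet: str
    params: dict
    timestamp: float
    cost_art: float = 0.0
    payouts: dict = field(default_factory=dict)

def execute_request(prompt: str, model_cid: str, user_wallet: str,
                    price_per_unit: float = 0.01) -> ExecutionRecord:
    """Sketch of the request lifecycle: select, infer, log, meter, settle."""
    # 1. Model selection (here the caller supplies the chosen CID directly).
    # 2. Inference execution -- stubbed; a real node would run the model.
    compute_units = len(prompt.split())  # stand-in for compute consumed
    # 3. Usage logging: CID, wallet, parameters, timestamp.
    record = ExecutionRecord(model_cid, user_wallet,
                             {"prompt_tokens": compute_units}, time.time())
    # 4. Metering & cost calculation in $ART.
    record.cost_art = compute_units * price_per_unit
    # 5. Settlement & distribution to creator, operator, and treasury.
    record.payouts = {k: round(record.cost_art * v, 8)
                      for k, v in SPLIT.items()}
    return record
```

A call such as `execute_request("a castle at dusk", "bafy-example-cid", "0xUserWallet")` yields one record whose payouts sum to its metered cost.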

Dynamic Token Metering

The metering algorithm adjusts execution cost based on:

  • Model Type & Size

  • Inference Latency

  • Priority Tier

  • Storage Access

  • Concurrent Load
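One plausible way to combine the five factors is a multiplicative surcharge on a base rate, as sketched below. Every coefficient here is an illustrative assumption; the document does not specify the protocol's actual weights.

```python
def meter_cost(base_rate: float, model_size_gb: float, latency_ms: float,
               priority_tier: int, storage_reads: int,
               concurrent_load: float) -> float:
    """Hypothetical metering formula over the five listed factors.

    All coefficients are illustrative placeholders.
    """
    size_factor = 1.0 + model_size_gb / 10.0        # larger models cost more
    latency_factor = 1.0 + latency_ms / 1000.0      # long-running jobs cost more
    tier_factor = {0: 1.0, 1: 1.5, 2: 2.5}[priority_tier]  # priority surcharge
    storage_factor = 1.0 + 0.001 * storage_reads    # per-read storage fee
    load_factor = 1.0 + concurrent_load             # congestion pricing, load in [0, 1]
    return (base_rate * size_factor * latency_factor * tier_factor
            * storage_factor * load_factor)
```

With all factors at their floor the cost collapses to the base rate; each dimension then scales the price independently.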

Monetization Rights for Model Owners

Model developers can configure monetization terms via smart contracts:

  • Fixed or tiered pricing curves

  • Access permissions

  • Remix licensing splits

  • Revenue streaming to DAO treasuries
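The four monetization levers might be represented as a single terms object attached to a model, as in this sketch. The field names, the basis-point remix split, and the tiered-curve lookup are assumptions for illustration, not the protocol's contract schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class MonetizationTerms:
    """Hypothetical on-chain terms a model owner might configure."""
    pricing_curve: tuple           # (usage_threshold, price_per_call) tiers
    allow_list: frozenset          # wallets permitted to call; empty = public
    remix_split_bps: int           # basis points routed to remixed-from creators
    dao_treasury: Optional[str]    # address receiving streamed revenue, if any

def price_for_usage(terms: MonetizationTerms, calls_this_epoch: int) -> float:
    """Pick the price from the highest tier the caller's usage has reached."""
    price = terms.pricing_curve[0][1]
    for threshold, tier_price in terms.pricing_curve:
        if calls_this_epoch >= threshold:
            price = tier_price
    return price
```

A fixed price is just a one-tier curve; adding `(1000, 0.04)` after `(0, 0.05)` gives a volume discount once a caller crosses 1,000 calls in an epoch.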

Transparent Execution Trails

All inference events are:

  • Recorded on-chain

  • Linked to input & output assets

  • Auditable & queryable

  • Incentive-linked
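The audit-trail properties above can be mimicked with a hash-linked event log: each record references input and output asset CIDs, chains to the previous event's hash, and is filterable by model. This is a local sketch of those properties, not the protocol's on-chain data structure.

```python
import hashlib
import json

def log_inference_event(chain: list, model_cid: str, input_cid: str,
                        output_cid: str, cost_art: float) -> dict:
    """Append an auditable inference event, hash-linked to the previous one."""
    prev_hash = chain[-1]["event_hash"] if chain else "0" * 64
    body = {"model_cid": model_cid, "input_cid": input_cid,
            "output_cid": output_cid, "cost_art": cost_art,
            "prev_hash": prev_hash}
    # Hash the canonical JSON form so any tampering breaks the link.
    body["event_hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return body

def events_for_model(chain: list, model_cid: str) -> list:
    """Queryable: filter the trail for a single model's executions."""
    return [e for e in chain if e["model_cid"] == model_cid]
```

Because each event embeds the previous event's hash, rewriting any record invalidates every later one, which is what makes the trail auditable rather than merely stored.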
