Jobs

Track optimization status, browse job history, and download result artifacts.

Job Lifecycle

Every optimization job progresses through a fixed set of states.

queued → running → uploading → succeeded or failed

queued

Job is accepted and waiting for a worker to pick it up.

running

Optimization is actively being computed. Market data has been fetched.

uploading

Computation complete. Results and artifacts are being uploaded to storage.

succeeded

All methods completed. Artifacts are ready for download.

failed

An error occurred. Check error_code and error_detail for diagnostics.
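The lifecycle above lends itself to a simple polling loop: check the job's status until it reaches a terminal state. A minimal sketch, where the `poll_job` helper, its backoff parameters, and the injected `fetch_status` callable are illustrative conveniences, not part of the API:

```python
import time

TERMINAL_STATES = {"succeeded", "failed"}

def poll_job(fetch_status, interval=2.0, timeout=300.0):
    """Call fetch_status() until it returns a terminal state.

    fetch_status is any callable returning the job's current status
    string, e.g. a wrapper around GET /jobs/{run_id}.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in TERMINAL_STATES:
            return status
        time.sleep(interval)
    raise TimeoutError("job did not reach a terminal state in time")
```

In practice, `fetch_status` would issue the GET /jobs/{run_id} request documented below and return `job["status"]`.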

GET
/jobs/{run_id}

Check the current status of an optimization job.

Authentication

Required — you must own the run (the job must have been submitted under your account).

Path Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| run_id | UUID | The unique identifier returned when the job was submitted. |

Response

200 OK

Successful job:

json
{
  "run_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "status": "succeeded",
  "created_at": "2025-01-15T10:30:00Z",
  "started_at": "2025-01-15T10:30:02Z",
  "finished_at": "2025-01-15T10:31:14Z",
  "error_code": null,
  "error_message": null,
  "error_detail": null
}

Failed job:

json
{
  "run_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "status": "failed",
  "created_at": "2025-01-15T10:30:00Z",
  "started_at": "2025-01-15T10:30:02Z",
  "finished_at": "2025-01-15T10:30:08Z",
  "error_code": 50002,
  "error_message": "DATA_FETCH_ERROR",
  "error_detail": "Unable to fetch price data for ticker INVALIDTICKER.NSE"
}

curl

bash
curl https://api.portfolioopt.in/jobs/a1b2c3d4-e5f6-7890-abcd-ef1234567890 \
  -H "Authorization: Bearer <YOUR_TOKEN>"

Python

python
import requests

BASE_URL = "https://api.portfolioopt.in"
TOKEN = "<YOUR_TOKEN>"
RUN_ID = "a1b2c3d4-e5f6-7890-abcd-ef1234567890"

headers = {"Authorization": f"Bearer {TOKEN}"}

response = requests.get(f"{BASE_URL}/jobs/{RUN_ID}", headers=headers)
response.raise_for_status()

job = response.json()
print(f"Status: {job['status']}")

if job["status"] == "failed":
    print(f"Error: {job['error_message']} - {job['error_detail']}")

GET
/jobs/history

List your optimization job history with cursor-based pagination.

Authentication

Required — returns only jobs belonging to the authenticated user.

Query Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| limit | integer | 20 | Number of items per page. Maximum 100. |
| cursor | string | | Opaque pagination cursor. Pass the next_cursor value from the previous response to fetch the next page. |
| q | string | | Search by run_id prefix. Useful for quickly finding a specific job. |

Response

200 OK
json
{
  "items": [
    {
      "run_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
      "status": "succeeded",
      "created_at": "2025-01-15T10:30:00Z",
      "methods": ["MVO", "HRP"],
      "stock_count": 5
    },
    {
      "run_id": "f9e8d7c6-b5a4-3210-fedc-ba9876543210",
      "status": "running",
      "created_at": "2025-01-15T11:00:00Z",
      "methods": ["MinVol"],
      "stock_count": 8
    }
  ],
  "next_cursor": "eyJjIjoiMjAyNS0wMS0xNVQxMDozMDowMFoiLCJpIjoiYTFiMmMzZDQifQ=="
}

Pagination note

This endpoint uses cursor-based pagination. When next_cursor is null, you have reached the last page. Do not attempt to construct or decode cursor values — treat them as opaque strings.

curl

bash
# First page
curl "https://api.portfolioopt.in/jobs/history?limit=10" \
  -H "Authorization: Bearer <YOUR_TOKEN>"

# Next page (use next_cursor from previous response)
curl "https://api.portfolioopt.in/jobs/history?limit=10&cursor=eyJjIjoiMjAyNS0wMS..." \
  -H "Authorization: Bearer <YOUR_TOKEN>"

Python

python
import requests

BASE_URL = "https://api.portfolioopt.in"
TOKEN = "<YOUR_TOKEN>"
headers = {"Authorization": f"Bearer {TOKEN}"}

# Fetch all jobs using cursor-based pagination
all_jobs = []
cursor = None

while True:
    params = {"limit": 20}
    if cursor:
        params["cursor"] = cursor

    resp = requests.get(f"{BASE_URL}/jobs/history", params=params, headers=headers)
    resp.raise_for_status()
    data = resp.json()

    all_jobs.extend(data["items"])
    cursor = data.get("next_cursor")

    if not cursor:
        break

print(f"Total jobs: {len(all_jobs)}")
for job in all_jobs:
    print(f"  {job['run_id'][:8]}...  {job['status']}  ({job['stock_count']} stocks)")

GET
/jobs/{run_id}/artifacts

Retrieve downloadable artifacts (charts, data files, reports) for a completed job.

Authentication

Required — you must own the run.

Path Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| run_id | UUID | The job identifier. Job must have status succeeded. |

Response

200 OK
json
{
  "artifacts": [
    {
      "artifact_id": "art_001",
      "method": "MVO",
      "kind": "returns_dist",
      "label": "MVO Returns Distribution",
      "signed_url": "https://storage.portfolioopt.in/artifacts/art_001?sig=...",
      "content_type": "image/png"
    },
    {
      "artifact_id": "art_002",
      "method": "MVO",
      "kind": "max_drawdown",
      "label": "MVO Maximum Drawdown",
      "signed_url": "https://storage.portfolioopt.in/artifacts/art_002?sig=...",
      "content_type": "image/png"
    },
    {
      "artifact_id": "art_global_001",
      "method": null,
      "kind": "covariance_heatmap",
      "label": "Covariance Heatmap",
      "signed_url": "https://storage.portfolioopt.in/artifacts/art_global_001?sig=...",
      "content_type": "image/png"
    },
    {
      "artifact_id": "art_global_002",
      "method": null,
      "kind": "optimization_report",
      "label": "Optimization Report (Excel)",
      "signed_url": "https://storage.portfolioopt.in/artifacts/art_global_002?sig=...",
      "content_type": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
    }
  ],
  "data_parquets": {
    "benchmark_returns": "https://storage.portfolioopt.in/parquet/bench_001?sig=...",
    "cumulative_returns": "https://storage.portfolioopt.in/parquet/cum_001?sig=...",
    "stock_yearly_returns": "https://storage.portfolioopt.in/parquet/yearly_001?sig=..."
  },
  "data_files": {
    "chart_bundle": "https://storage.portfolioopt.in/bundles/chart_bundle_001.json.gz?sig=..."
  }
}

Artifact Types

| Kind | Scope | Content Type | Description |
| --- | --- | --- | --- |
| data | Global | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet | Excel report (label: "optimization_report") |
| data | Global | application/json | Pre-computed chart data bundle, gzip-compressed JSON (label: "chart_bundle") |
| report_pdf | Global | application/pdf | PDF report (current format). Legacy runs may use kind="data" + label="optimization_report_pdf" |
| data | Global | application/parquet | Benchmark daily returns (label: "benchmark_returns") |
| data | Global | application/parquet | Cumulative portfolio returns (label: "cumulative_returns") |
| data | Global | application/parquet | Yearly returns per stock (label: "stock_yearly_returns") |
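The chart_bundle artifact is gzip-compressed JSON. A minimal sketch of decoding it once the raw bytes are downloaded; note that if the storage server serves it with a `Content-Encoding: gzip` header (an assumption not confirmed here), an HTTP client such as requests may already have decompressed the body, in which case `json.loads` alone suffices:

```python
import gzip
import json

def decode_chart_bundle(raw: bytes) -> dict:
    """Decompress a gzip-compressed JSON chart bundle into a dict."""
    return json.loads(gzip.decompress(raw).decode("utf-8"))
```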

Signed URL expiry

All signed_url values expire after 1 hour. If you need to download artifacts after expiry, call this endpoint again to receive fresh URLs.
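A long-running download script may therefore want to refresh URLs when a download fails. A sketch of that pattern, where the injected `fetch` and `refresh_url` callables are illustrative (in practice `refresh_url` would call GET /jobs/{run_id}/artifacts again and pick out the matching artifact), and the assumption that an expired signature surfaces as a fetch error is mine, not documented API behaviour:

```python
def download_with_refresh(url, fetch, refresh_url, max_refreshes=1):
    """Try to fetch a signed URL; on failure, obtain a fresh URL
    for the same artifact and retry.

    fetch(url) -> bytes, raising an exception on an expired signature.
    refresh_url() -> a fresh signed URL for the same artifact.
    """
    for attempt in range(max_refreshes + 1):
        try:
            return fetch(url)
        except Exception:
            if attempt == max_refreshes:
                raise
            url = refresh_url()
```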

curl

bash
curl https://api.portfolioopt.in/jobs/a1b2c3d4-e5f6-7890-abcd-ef1234567890/artifacts \
  -H "Authorization: Bearer <YOUR_TOKEN>"

Python — Download All Artifacts

python
import requests
import os

BASE_URL = "https://api.portfolioopt.in"
TOKEN = "<YOUR_TOKEN>"
RUN_ID = "a1b2c3d4-e5f6-7890-abcd-ef1234567890"

headers = {"Authorization": f"Bearer {TOKEN}"}

# Fetch artifact metadata
resp = requests.get(f"{BASE_URL}/jobs/{RUN_ID}/artifacts", headers=headers)
resp.raise_for_status()
data = resp.json()

# Download each artifact using its signed URL
output_dir = f"./results/{RUN_ID[:8]}"
os.makedirs(output_dir, exist_ok=True)

for artifact in data["artifacts"]:
    # Pick a file extension from the content type (fall back to .bin)
    ext = {
        "image/png": "png",
        "application/pdf": "pdf",
        "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet": "xlsx",
    }.get(artifact["content_type"], "bin")
    filename = f"{artifact['kind']}_{artifact['method'] or 'global'}.{ext}"
    filepath = os.path.join(output_dir, filename)

    print(f"Downloading {artifact['label']}...")
    download = requests.get(artifact["signed_url"])
    download.raise_for_status()

    with open(filepath, "wb") as f:
        f.write(download.content)

# Download Parquet data files
for name, url in data["data_parquets"].items():
    filepath = os.path.join(output_dir, f"{name}.parquet")
    print(f"Downloading {name}.parquet...")
    download = requests.get(url)
    download.raise_for_status()
    with open(filepath, "wb") as f:
        f.write(download.content)

print(f"\nAll artifacts saved to {output_dir}/")

Error Codes

Job-specific error codes returned in the error_code and error_message fields.

| Code | Name | Description |
| --- | --- | --- |
| 40401 | NOT_FOUND | Job not found or not owned by the authenticated user |
| 50001 | OPTIMIZATION_FAILED | Optimization computation failed during processing |
| 50002 | DATA_FETCH_ERROR | Could not fetch market data for one or more tickers |
| 50099 | UNEXPECTED_ERROR | An unexpected server error occurred |
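A failed job's error_code tells you whether to fix the request or treat it as a server-side failure. A small illustrative triage function built on the table above; the classification policy is a suggestion, not documented API behaviour:

```python
def classify_error(code: int) -> str:
    """Rough triage of job error codes (illustrative policy only)."""
    if code == 40401:
        return "not_found"     # wrong run_id, or the job is not yours
    if code == 50002:
        return "bad_input"     # fix the offending tickers and resubmit
    if code in (50001, 50099):
        return "server_error"  # report the run_id, or retry later
    return "unknown"
```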