This code is designed to fetch articles from Finfo's API service, which requires authentication. It handles authentication, pagination, and error handling to ensure that all articles for a given supplier can be retrieved and saved in JSON format. The code is structured in several parts to organize the functionality and make it easier to maintain.
The code does not include any retry or backoff logic for the authentication calls or the article requests. This should be added before the script is run unattended; one possible approach is sketched below.
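As a starting point, the sketch below wraps a request in a simple exponential backoff loop. It is not part of the original script, and the retry count, delays, and the set of status codes treated as transient are assumptions that should be tuned to the API's actual behavior.

import time
import requests

def request_with_backoff(method, url, max_retries=5, **kwargs):
    """Illustrative helper: retry a request with exponential backoff on
    network errors and on HTTP 429/5xx responses. The limits used here are
    assumptions, not documented Finfo rate limits."""
    delay = 1
    response = None
    for attempt in range(max_retries):
        try:
            response = requests.request(method, url, **kwargs)
        except requests.exceptions.RequestException:
            response = None
        if response is not None and response.status_code not in (429, 500, 502, 503, 504):
            return response
        time.sleep(delay)
        delay *= 2  # double the wait before the next attempt
    return response

# Example usage inside the article loop further down:
#     response = request_with_backoff('GET', url, headers=get_headers())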
Authentication with OAuth2
To retrieve articles, a valid access token obtained through OAuth2 authentication is required. The code includes functions to both obtain a new token and refresh it when it has expired. This ensures that API requests always have a valid token.
When the code is executed, a token retrieval is initiated through the function get_access_token(). This function performs an HTTP POST request to the authentication server using the client_secret, client_id, user, and password. The response contains an access token and information about when the token expires. This information is stored and used to authenticate subsequent API requests.
If a token has expired, the code uses the function refresh_access_token() to renew the token using a refresh token. This ensures continuous access to the API without the need for manual login.
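If the refresh token itself is rejected (for example because it has also expired), the helper functions in the script will fail. A minimal sketch of a fallback, reusing the functions defined in the full listing below, could look like this; the function name get_valid_token and the choice of exceptions to catch are assumptions, not part of the original code.

def get_valid_token(tokens):
    """Illustrative sketch: refresh an expired token, and fall back to a full
    re-authentication with username and password if the refresh request fails."""
    if is_token_expired(tokens):
        try:
            tokens = refresh_access_token(tokens['refresh_token'])
        except (KeyError, requests.exceptions.RequestException):
            # The refresh token was not accepted; log in again from scratch.
            tokens = get_access_token()
    return tokens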
Retrieving Articles
The main function fetch_all_articles() is responsible for retrieving all articles for a specific supplier. It handles pagination by incrementally increasing an offset and making repeated requests until all pages have been retrieved. The function also checks HTTP response codes to handle potential errors, such as when access to a supplier’s articles is denied.
The articles are retrieved as a list of JSON objects, with each article containing various fields of information. If the API returns an error or no articles are found, the code handles this by displaying appropriate messages.
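The same pagination loop can also be written as a generator that yields one page at a time, which avoids keeping every article in memory for suppliers with very large assortments. The sketch below is an optional variation, not part of the original script; it reuses the article endpoint and the get_headers() helper from the full listing further down.

def iter_article_pages(supplier_id, from_date, to_date, limit=1000):
    """Illustrative sketch: yield one page of articles at a time instead of
    building the complete list in memory."""
    offset = 0
    while True:
        url = (f'https://api.finfo.se/api/v2.2/article'
               f'?finfosupplierid={supplier_id}'
               f'&changed-from-date={from_date}'
               f'&changed-to-date={to_date}'
               f'&offset={offset}'
               f'&limit={limit}')
        response = requests.get(url, headers=get_headers())
        if response.status_code != 200:
            break
        page = response.json().get('articleList', [])
        if not page:
            break
        yield page
        if len(page) < limit:
            break
        offset += limit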
Saving Articles to JSON Files
Once all articles for a supplier have been retrieved, they are saved in a JSON file. The filename includes the supplier’s ID and the current timestamp to make it easy to identify the data. This enables efficient storage and organization of retrieved articles for further processing or analysis.
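For later processing, the saved files can be located by this naming pattern and loaded again. A minimal sketch, assuming the files are written to the current working directory as in the script below:

import glob
import json

def load_saved_articles(supplier_id):
    """Illustrative sketch: load every JSON file previously saved for a
    supplier, based on the '<supplierId>_<timestamp>.json' naming scheme."""
    articles = []
    for path in sorted(glob.glob(f'{supplier_id}_*.json')):
        with open(path, 'r', encoding='utf-8') as jsonfile:
            articles.extend(json.load(jsonfile))
    return articles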
You need to update these fields to match your own credentials. Note that the execution will take a long time if you have many suppliers.
- client_secret = 'client secret'
- user = 'your username'
- password = 'your password'
import requests
import time
import json
from datetime import datetime, timedelta
# Configuration for OAuth2 authentication
access_token_url = 'https://sso.logiq.no/auth/realms/finfo/protocol/openid-connect/token'
client_id = 'finfo-api'
client_secret = 'client secret'
user = 'your username'
password = 'your password'
grant_type = 'password'
scope = 'openid'
# Token handling functions
def get_access_token():
"""Fetches a new access token from the authentication server."""
response = requests.post(access_token_url, auth=(client_id, client_secret), data={
'grant_type': grant_type,
'username': user,
'password': password,
'scope': scope
})
    response.raise_for_status()  # fail with a clear error if authentication was rejected
    token_data = response.json()
token_data["expires_at"] = time.time() + token_data["expires_in"]
return token_data
def refresh_access_token(refresh_token):
"""Renews the access token using a refresh token."""
response = requests.post(access_token_url, auth=(client_id, client_secret), data={
'grant_type': 'refresh_token',
'refresh_token': refresh_token,
'scope': scope
})
    response.raise_for_status()  # fail with a clear error if the refresh was rejected
    token_data = response.json()
token_data["expires_at"] = time.time() + token_data["expires_in"]
return token_data
def is_token_expired(token_data):
"""Checks if the current token has expired."""
return time.time() > token_data["expires_at"]
tokens = get_access_token()
def get_headers():
"""Ensures each API call uses a valid access token."""
global tokens
if is_token_expired(tokens):
print("Token has expired, refreshing...")
tokens = refresh_access_token(tokens['refresh_token'])
return {'Authorization': 'Bearer ' + tokens['access_token']}
def fetch_all_articles(supplier_id, from_date, to_date):
"""
Fetches all articles for a given supplier within a date range.
Handles pagination automatically by fetching 1000 articles at a time.
Args:
supplier_id: The Finfo supplier ID
from_date: Start date for article changes (YYYY-MM-DD)
to_date: End date for article changes (YYYY-MM-DD)
Returns:
List of articles or None if access is denied
"""
articles = []
offset = 0
limit = 1000
while True:
url = (f'https://api.finfo.se/api/v2.2/article'
f'?finfosupplierid={supplier_id}'
f'&changed-from-date={from_date}'
f'&changed-to-date={to_date}'
f'&offset={offset}'
f'&limit={limit}')
response = requests.get(url, headers=get_headers())
if response.status_code == 403:
print(f"Access denied for supplier {supplier_id}")
return None
if response.status_code != 200:
print(f"Error: Received status code {response.status_code} for supplier ID {supplier_id}")
break
try:
data = response.json()
except requests.exceptions.JSONDecodeError:
print(f"JSONDecodeError: Could not decode JSON for supplier ID {supplier_id}")
break
article_list = data.get('articleList', [])
if not article_list:
print(f"No articles found for supplier {supplier_id} in the specified date range.")
break
articles.extend(article_list)
if len(article_list) < limit:
break
offset += limit
return articles
# Main execution
def main():
"""
Main function that:
1. Gets today's date and yesterday's date
2. Fetches all suppliers for the authenticated user
3. For each supplier, fetches all articles modified in the last 24 hours
4. Saves the articles to JSON files named with supplier ID and timestamp
"""
# Get today's date and yesterday's date for the date range
to_date = datetime.now().strftime("%Y-%m-%d")
from_date = (datetime.now() - timedelta(days=1)).strftime("%Y-%m-%d")
headers = get_headers()
    response = requests.get('https://api.finfo.se/api/v1.0/suppliers/mysuppliers', headers=headers)
    response.raise_for_status()  # stop if the supplier list could not be fetched
    suppliers_data = response.json()
for supplier in suppliers_data:
finfoSupplierId = supplier['finfoSupplierId']
articles_data = fetch_all_articles(finfoSupplierId, from_date, to_date)
if articles_data:
current_time = datetime.now().strftime("%Y%m%d%H%M%S")
file_name = f'{finfoSupplierId}_{current_time}.json'
with open(file_name, 'w', encoding='utf-8') as jsonfile:
json.dump(articles_data, jsonfile, ensure_ascii=False, indent=4)
print(f"Articles for supplier {finfoSupplierId} have been saved to {file_name}.")
print("Articles have been saved to JSON files.")
if __name__ == "__main__":
main()