Let AI do the reporting, so AMs can do the thinking

How we automated the entire reporting pipeline and got back 10+ hours a week to actually do our jobs.

Ecomobi  ·  AI love TikTok Team  ·  May 2026

The Idea Behind This

This solution uses AI to automate the entire reporting process for the Account Management team — from data collection and cleaning to performance calculation and basic insight generation. Instead of spending hours on repetitive, manual tasks, we rely on AI to generate structured, accurate, ready-to-use reports. This frees us up to focus on what actually matters: deeper analysis, strategic thinking, and meaningful conversations with clients. By removing manual workload and minimizing human error, the solution doesn’t just make us faster — it makes our reporting more valuable.

Who Is This For?

| Team | Role in the Reporting Flow | Benefit from This Solution |
| --- | --- | --- |
| AM Direct Brand ⭐ Primary | We own the reporting relationship with enterprise brands (Unilever, P&G, L’Oreal), especially Unilever — responsible for data accuracy, insight quality, and delivery timeliness every week. | Eliminates 90%+ of our manual reporting work. We shift focus to strategy, analysis, and client communication. |
| CIR Creator Team | Manages creator network performance — tracking Affiliate creators by tier and content type. Consumes our Affiliate reports to evaluate creator output. | Gets clean, auto-generated creator performance breakdowns without waiting on manual processing. |
| OP Live Team | Operates the Brandlive LIVE sessions across brand channels. Provides operational context (stock events, voucher status, budget burn) that feeds directly into AI insight generation. | Their operational inputs become visible and actionable in every report — no more context lost in Slack threads. |
Table 1: Teams involved and how each benefits from this solution

Demo Video: https://drive.google.com/drive/folders/1cHHwtVvjstcjNTAbU8sv2SedmDujZh2s

Part 1: What We Were Actually Doing Every Week

As AMs at Ecomobi, we handle two TikTok Shop service lines: Affiliate Marketing — managing creator campaigns across Livestream and Video formats — and Self-selling Live (Brandlive), where we operate client-owned LIVE channels across multiple brand accounts. Our clients — Unilever, P&G, and L’Oreal — are enterprise-grade brands. They expect detailed weekly breakdowns, WoW and MoM trend analysis, and clear optimization recommendations, all delivered within 48 hours of data cut-off.

Before this solution, meeting those expectations meant one thing: hours of manual work, every single week — before we could even start thinking strategically.

Affiliate Reporting — Every Monday

Our Affiliate team tracks a creator network spanning multiple tiers — mega, macro, micro, and nano — across both Livestream and short Video content. Every Monday, we go through four sequential manual steps:

| Task | Time | What We Actually Do | People | Risk |
| --- | --- | --- | --- | --- |
| Download data from admin tool | ~5 min | Log into the internal portal, manually export raw creator performance data — repeated every Monday | 1 | Version mismatch if exported at the wrong time |
| Process data & map creator types | ~20 min | Cross-reference every creator row against our classification list (mega/macro/micro/nano) in Excel using VLOOKUP | 1 | Easy to mis-map; no validation step |
| Map data into brand template | ~60 min | Copy-paste processed data into the client’s proprietary report format, manually apply formulas and formatting rules | 2 | Formula errors cascade — one wrong cell affects the whole report |
| Find insights & write action plan | ~180 min | We read through the numbers, interpret trends, and write recommendations from scratch — no AI, no framework, just us and a deadline | 2 | Quality varies week to week depending on who’s writing and how much time is left |
Table 2: Our Affiliate weekly reporting tasks — before AI

Total time per week: ~4 hours 45 minutes, across 2 AMs. Most of it is mechanical — downloading, mapping, filling templates. Only the last 3 hours involve actual thinking. But by the time we get there, we’re already tired from the mechanical steps, and the deadline is closing in.

Brandlive Reporting — Every Tuesday

Brandlive is operationally more complex. Each LIVE session on each brand channel generates its own dataset: GMV per hour, orders, AOV, CVR, traffic sources, SKU sell-through, voucher redemptions. With 5 brand channels to manage, the data volume is significant — and every Tuesday, we pull all of it by hand:

| Task | Time | What We Actually Do | People | Risk |
| --- | --- | --- | --- | --- |
| Download data from TSP — session & SKU level, 5 brand channels | ~120 min | Log into TikTok Shop Partner separately for each of our 5 channels, export session data AND SKU data one by one — 10 manual downloads, different formats each time | 4 | Channels get mixed up; sessions misattributed; formats inconsistent across brands |
| Map to campaign tier & fix format | ~30 min | Manually match each session and SKU row to the right campaign tier (Tier 1/2/3). Fix date formats, remove duplicates, handle missing rows | 4 | Tier mis-assignment causes wrong benchmark comparisons downstream |
| Build aggregated report by brand template | ~60 min | Consolidate all 5 channels into one master report showing WoW GMV, orders, AOV, CVR, and traffic by source | 2 | Double-entry risk; template logic differs per brand and is easy to mix up |
| Find insights & write action plan | ~240 min | We manually read through numbers, try to explain performance gaps, and recommend next steps — often without having the full operational context from LIVE sessions | 4 | Operational events (stock-outs, voucher depletion, budget burnout) frequently missed or forgotten |
Table 3: Our Brandlive weekly reporting tasks — before AI

Total time per week: ~7 hours 30 minutes, across 4 people. The data download step alone takes 2 hours — logging into TikTok Shop Partner separately for each of the 5 brand channels and exporting both session-level and SKU-level data one by one. That’s 10 separate manual downloads, every Tuesday, before we’ve written a single insight.

Combined across both services, we were spending over 12 hours every week — just on reporting. More than one and a half full working days, consumed by tasks that produced no direct strategic value on their own.

Part 2: What Was Breaking — and Why It Mattered

The hours were the visible problem. But the downstream effects were where things got serious.

Data accuracy errors

When we’re pulling from multiple sources, mapping across Excel sheets, and filling templates manually, errors are almost unavoidable. A wrong VLOOKUP. A formula referencing the wrong column. A date range that’s one day off. These feel like small mistakes — until they land in a report going to Unilever’s regional marketing team.

Brands flagged discrepancies. Sometimes a number didn’t match what they saw on their own TikTok dashboard. Each time it happened, we had to go back, find the error, rebuild the affected section, and re-send. More hours lost — and more importantly, trust eroded.

No time buffer between data cut-off and delivery

Our data cut-off is Sunday midnight. Reports are expected Tuesday or Wednesday. That gives us roughly 36–48 hours to download, process, map, analyze, write, and deliver — across two different service lines. With a 12-hour manual pipeline, there’s almost no margin for anything to go wrong.

The result is predictable: the section that requires the most thinking — insights and action planning — is always the most rushed. The highest-value part of our report consistently gets the least time.

Insight quality was inconsistent and often incomplete

Writing good insights isn’t just about reading numbers — it requires context. Why did GMV drop on Thursday? Was it a platform issue? Did the voucher budget run out at 2pm? Was a key SKU out of stock? Did a creator post at the wrong time?

In our manual process, this context lives in people’s heads — or in a Slack thread that may or may not get read before write-up time. If we’re not the same person who handled operations that week, the context gets lost. We end up writing descriptive insights: “GMV decreased 15% WoW” — without the “because” that brands actually need to act on.

This wasn’t a skills problem. It was a structural problem. The right information existed — it just had no systematic way of making it into the report.

Brand satisfaction suffered

For clients like P&G and L’Oreal, a weekly report isn’t just a document. It’s the basis for real decisions: Do we increase budget next week? Shift creator tier mix? Restock SKU X before the next LIVE? If the data is inaccurate or the insights are vague, none of those decisions can be made with confidence.

That leads to follow-up calls to re-explain numbers, delayed decisions on the brand side, and a client that’s quietly wondering whether the agency partnership is actually delivering value.

Part 3: The Solution — Let AI Handle Everything It Can

The framing matters here: the goal wasn’t to “use AI somewhere in the process.” The goal was to remove every reporting task that doesn’t require our judgment — and let AI handle all of it. What’s left for us is the part that actually needs us: interpreting context, applying strategic thinking, and communicating with clients. That’s the thinking part. Everything before it is the reporting part. And the reporting part should run itself.

Step 1 — Automated Data Ingestion

The first step was eliminating manual data downloads entirely.

  • For Affiliate: A scheduled tech crawler connects to our internal admin tool and pulls raw creator performance data automatically — views, GMV, orders, CTR, CVR — every day. No one needs to log in. No manual export. By Monday morning, the data is already clean and sitting in a structured database.
  • For Brandlive: A multi-channel crawler connects to TikTok Shop Partner (TSP) and pulls both session-level and SKU-level data across all 5 brand channels simultaneously. It runs nightly — so we start the week with data already ready, not a to-do list.

Data is validated on ingestion — missing fields, channel attribution errors, and anomalies are flagged automatically before they reach the report. No more discovering errors at the finish line.
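To make that ingestion-time validation concrete, here is a minimal sketch of the kind of checks it runs. The field names, channel IDs, and the GMV-spike heuristic are illustrative placeholders we invented for this post, not our production schema.

```python
from dataclasses import dataclass, field

# Illustrative schema — real required fields and channel IDs live in config
REQUIRED_FIELDS = ("channel", "session_date", "gmv", "orders")
KNOWN_CHANNELS = {"brand_a", "brand_b"}

@dataclass
class ValidationResult:
    ok: list = field(default_factory=list)       # rows that passed all checks
    flagged: list = field(default_factory=list)  # (row, issues) pairs to review

def validate_rows(rows, prev_week_gmv_by_channel, spike_factor=5.0):
    """Flag rows with missing fields, bad channel attribution, or GMV anomalies."""
    result = ValidationResult()
    for row in rows:
        issues = []
        # 1. Missing required fields
        missing = [f for f in REQUIRED_FIELDS if row.get(f) in (None, "")]
        if missing:
            issues.append(f"missing fields: {missing}")
        # 2. Channel attribution check
        if row.get("channel") not in KNOWN_CHANNELS:
            issues.append(f"unknown channel: {row.get('channel')!r}")
        # 3. Crude anomaly check: GMV far above last week's channel total
        baseline = prev_week_gmv_by_channel.get(row.get("channel"))
        if baseline and row.get("gmv", 0) > spike_factor * baseline:
            issues.append("gmv spike vs. last week")
        (result.flagged if issues else result.ok).append((row, issues))
    return result
```

Anything that lands in `flagged` is surfaced before report generation starts, which is exactly how errors stop reaching the finish line.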

Step 2 — Operational Context from the OP Live Team

Most automated systems ignore the operational layer. That’s exactly why auto-generated insights often feel shallow — they can describe what happened in the data, but miss why it happened on the ground.

We built a structured input channel so the OP Live team can log key events during and after each LIVE session:

  • Voucher depletion: when promotional vouchers ran out and at what time
  • Stock exhaustion: which SKUs went out of stock and when during the session
  • Budget burnout: when media spend was fully used before the session ended
  • Technical disruptions: stream drops, platform errors, payment gateway failures

These logs go directly into the AI insight engine. Now, when we look at the report, we can see that GMV dropped at 3pm because the voucher budget ran out at 2:45pm — not because the audience lost interest or the creator underperformed. That’s a completely different insight. And because the OP Live team’s input is structured and captured systematically, it never gets lost in a Slack thread again.
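To show what "structured" means here, this is a sketch of an event-log schema along the lines of what the OP Live team fills in. The class and field names are hypothetical illustrations, not our actual data model.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class EventType(Enum):
    VOUCHER_DEPLETION = "voucher_depletion"
    STOCK_EXHAUSTION = "stock_exhaustion"
    BUDGET_BURNOUT = "budget_burnout"
    TECH_DISRUPTION = "tech_disruption"

@dataclass
class OpsEvent:
    channel: str          # which brand channel the event happened on
    event_type: EventType
    occurred_at: datetime # precise timestamp, so it can be matched to metrics
    detail: str           # free-text note, e.g. which SKU or voucher

def events_for_window(events, channel, start, end):
    """Return the ops events that overlap a metric window on one channel,
    so a GMV dip can be cross-referenced with what happened on the ground."""
    return [e for e in events
            if e.channel == channel and start <= e.occurred_at <= end]
```

With this in place, a 3pm GMV drop and a 2:45pm voucher-depletion event land in the same query result, and the insight engine can connect them automatically.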

Step 3 — AI-Powered Template Mapping & Report Generation

Each brand has its own reporting template — its own columns, KPI formulas, formatting rules, and comparison logic. Learning a new brand template used to mean weeks of shadowing. Now, the AI learns it once and applies it consistently every week:

  • The AI ingests the template schema and maps each field to the correct data source in our database
  • When new data arrives, it auto-populates the full template — including all WoW and MoM delta calculations and trend indicators
  • Any metric that moves beyond a defined threshold (e.g., CVR drops >15% WoW, GMV down >20% MoM) is automatically flagged and highlighted for our review
  • Formatting, number formats, and layout are applied exactly as each brand expects — consistently, every week

What used to take 60–120 minutes of copy-paste and formula-checking now takes seconds, with accuracy we couldn’t guarantee manually.
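The delta-and-threshold logic behind that flagging is simple enough to sketch. The thresholds below are the examples from the list above; the function names and metric dict are illustrative, since the real rules are defined per brand template.

```python
def wow_delta(current, previous):
    """Week-over-week change as a fraction; None when there is no baseline."""
    if previous in (None, 0):
        return None
    return (current - previous) / previous

# Example thresholds from the post: CVR drop >15% WoW, GMV down >20%
THRESHOLDS = {"cvr": -0.15, "gmv": -0.20}

def flag_metrics(current, previous):
    """Return metrics whose WoW delta breaches the per-metric floor."""
    flags = {}
    for metric, floor in THRESHOLDS.items():
        delta = wow_delta(current.get(metric), previous.get(metric))
        if delta is not None and delta <= floor:
            flags[metric] = delta
    return flags
```

Everything returned by `flag_metrics` gets highlighted in the populated template for AM review, rather than buried in a sea of unchanged numbers.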

Step 4 — AI-Assisted Insight Generation

This is where our role as AMs transforms most visibly. AI doesn’t replace our judgment — it structures and accelerates it.

The AI analyzes performance data and the OP Live team’s operational logs together, then produces structured insight drafts in a consistent format:

  • Observation: What changed this week vs. last week and last month — with specific numbers
  • Likely Cause: Based on data patterns cross-referenced with the OP Live operational context
  • Recommended Action: Specific, prioritized next steps for the brand to act on

We review the draft, validate the logic, add any strategic nuance that needs account-level knowledge, and send. That review takes about 30 minutes — down from 3 to 4 hours. And the quality is higher, because the AI never forgets the operational context, never runs out of time, and always follows the same analytical framework regardless of which AM is on duty.
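The consistency comes from the draft always having the same three slots. A minimal sketch of that Observation → Cause → Action structure, with hypothetical names (the real drafts are produced by the AI engine, not hand-built like this):

```python
from dataclasses import dataclass

@dataclass
class InsightDraft:
    observation: str   # what changed WoW/MoM, with specific numbers
    likely_cause: str  # data pattern cross-referenced with OP Live context
    action: str        # specific, prioritized next step for the brand

def render(draft):
    """Render a draft in the fixed format every AM reviews each week."""
    return (f"Observation: {draft.observation}\n"
            f"Likely Cause: {draft.likely_cause}\n"
            f"Recommended Action: {draft.action}")
```

Because every draft arrives in this shape, the review step is the same regardless of brand or AM on duty: check the numbers, challenge the cause, sharpen the action.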

The biggest shift is in what we’re actually doing. Before: we were data processors who occasionally wrote strategy. After: we are strategists who occasionally review AI-generated drafts. That’s a fundamentally different job — and a much more valuable one for our clients.

Part 4: Before vs. After

| Task | Service | Before | After | People | What AI Does |
| --- | --- | --- | --- | --- | --- |
| Data download | Affiliate | 5 min / 1p | Auto | 0 | Scheduler pulls from admin tool daily |
| Data processing & creator mapping | Affiliate | 20 min / 1p | Auto | 0 | AI matches creators to classification tiers |
| Brand template mapping | Affiliate | 60 min / 2p | Auto | 0 | AI populates template + calculates WoW/MoM |
| Insights & action plan | Affiliate | 3h / 2p | 30 min / 2p | 2 AMs | AI drafts → we validate & add strategy |
| Data download (5 channels) | Brandlive | 2h / 4p | Auto | 0 | Multi-channel crawler pulls nightly from TSP |
| Tier mapping & format fix | Brandlive | 30 min / 4p | Auto | 0 | Rule-based engine handles tier assignment |
| Aggregated brand report | Brandlive | 1h / 2p | Auto | 0 | AI assembles and formats the final report |
| Insights & action plan | Brandlive | 4h / 4p | 30 min / 2p | 2 AMs | AI + OP Live context → draft → we review |
Table 4: Task-by-task comparison — before and after AI implementation

Summing it up across both services:

Affiliate total: 4h 45min → 30 minutes   (89% time reduction)

Brandlive total: 7h 30min → 30 minutes   (93% time reduction)

Combined: ~12h 15min → ~1 hour per week   (over 90% reduction)

What changed beyond the hours

| Metric | Before | After |
| --- | --- | --- |
| Total weekly reporting hours | ~12h 15min across both services | ~1 hour — insights review only |
| AMs involved in reporting | Up to 6 people per week | 2 people (review & strategic add-on only) |
| Data accuracy | Frequent errors; brand escalations every few weeks | Near-zero errors — automated validation at every step |
| Report delivery time | 1–2 days post data cut-off (tight deadline every week) | Same day or next morning after cut-off |
| Insight consistency | Variable — depended on who wrote it and how much time we had left | Standardized AI framework applied to all brands, every week |
| Operational context in insights | Often missing — stock events or voucher issues forgotten by write-up time | Captured by OP Live team and fed directly into the AI engine |
| Brand satisfaction | Moderate — complaints on delays and data accuracy | Improved — faster delivery, accurate data, clearer action items |
| AM time for strategic work | Almost none — we were fully consumed by reporting tasks | ~10h+/week redirected to analysis, strategy & client relationships |
Table 5: Outcome summary across all key dimensions

Part 5: What We Do With the Time We Get Back

Freeing up 10+ hours per week isn’t just an efficiency gain. It changes what’s actually possible for us as AMs.

From reactive reporting to proactive strategy

When reporting takes 12 hours a week, we’re always in catch-up mode. We finish one report and it’s almost time to start the next. There’s no room to think ahead — to proactively spot a trend before it becomes a problem, or to prepare a brand for what’s coming in the next campaign cycle.

With that time back, we can run proactive account reviews, build campaign scenarios for upcoming periods, and surface strategic opportunities before clients even ask. That’s the difference between an AM who delivers reports and an AM who drives decisions.

Better, more confident client conversations

When a brand receives an accurate report with clear causal insights and specific action recommendations, the entire conversation changes. Instead of “can you double-check this number” or “we’re not sure what drove the drop,” the meeting becomes: “here are the top 3 things we’re optimizing for next week — do you agree?”

That’s the relationship that builds long-term partnerships. And it’s the relationship we’re now positioned to have.

Consistent quality across all brands

Before, insight quality depended on who wrote the report and how much time was left. Now, every brand — whether it’s Unilever or L’Oreal — gets a report built on the same structured analytical framework. Our job isn’t to create the framework anymore. It’s to apply our expertise to the output. That’s a much better use of what we actually bring to the table.

We can scale without burning out

Before this solution, adding a new brand client meant adding reporting hours — and usually, adding people to absorb those hours. The cost of growth was linear and it came at a human cost.

Now, onboarding a new brand means loading their report template into the AI engine and adding their credentials to the crawler. Hours of setup, not weeks of training. We can take on more clients without the team collapsing under the reporting weight of each new one.

What We Learned

A few things that became clear as we went through this process:

01  Ask the right question first. Not “how do we speed up the report” — but “which parts of the report actually require a human.” Most of our pipeline didn’t. Seeing that clearly was the first step to changing it.

02  Operational context is the missing ingredient. Data tells you what happened. The OP Live team knows why. The key was building a structured bridge between the two — and that’s what makes our insights actually useful instead of just descriptive.

03  A standardized framework makes the AM’s review step faster and better. When the AI always follows Observation → Cause → Action, we know exactly what to look for and where to add our own judgment.

04  Removing manual steps improves both speed and accuracy at the same time. Most of our data errors came from manual data entry — copy-paste mistakes, wrong cell references, mismatched formats. Automating that layer didn’t just save time. It eliminated our biggest source of inaccuracy.

The goal was never to remove AMs from the reporting process. It was to make sure that when we’re involved, we’re doing something that actually requires us — analysis, judgment, strategy, and client relationships. Everything else should run itself.

Reporting used to be the job. Now it’s just the starting point. And that changes everything about what we’re able to do for our clients.

— AI love TikTok Team, Ecomobi
