What's New
Every update to Social Neuron is listed here with the problem it solves and the impact for your workflow. Most recent changes appear first.
April 2026
Developer Documentation Site
Problem: Developers integrating with Social Neuron's API had to piece together information from scattered sources.
What changed: A dedicated documentation site with auto-generated API reference for both the REST API and MCP Tools API, complete with request/response examples, authentication guides, and interactive exploration.
Impact: If you are building integrations or automations, or connecting Social Neuron to your own tools, everything you need is in one place. The OpenAPI spec means you can generate client SDKs in any language.
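As a rough illustration of what calling the REST API looks like, here is a minimal request-builder sketch. The base URL, endpoint path, and bearer-token scheme here are placeholders, not Social Neuron's real values -- check the documentation site for the actual spec.

```python
# Sketch of building an authenticated request against a REST API.
# The base URL and "/ideas" path are hypothetical placeholders.

def build_request(path: str, token: str,
                  base_url: str = "https://api.example.com/v1"):
    """Return the URL and headers for an authenticated GET request."""
    url = f"{base_url}/{path.lstrip('/')}"
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/json",
    }
    return url, headers

url, headers = build_request("/ideas", "sk-test-123")
print(url)  # https://api.example.com/v1/ideas
```

With the OpenAPI spec from the docs site, the same shape can be generated for you by any standard client-SDK generator instead of written by hand.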
March 2026
MCP Server v1.6.1 -- 60 Tools for AI Agents
Problem: AI-native workflows required copying and pasting between Social Neuron and AI assistants like Claude.
What changed: The MCP server now exposes 60 tools across 23 modules -- ideation, content creation, distribution, analytics, brand management, comments, planning, autopilot, quality checks, and more. OAuth 2.0 with PKCE authentication. Available as an npm package or via the HTTP endpoint.
Impact: Connect Social Neuron to Claude Code, Claude Desktop, or any MCP-compatible client and control your entire content pipeline through conversation. Ask your AI assistant to "generate 5 content ideas about sustainable fashion and schedule the best one for Thursday" and it executes the full workflow.
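For reference, connecting an npm-distributed MCP server to Claude Desktop typically uses a config entry like the one below. The package name `@social-neuron/mcp-server` is a hypothetical placeholder -- use the actual package name from the docs; only the surrounding `mcpServers` structure is the standard Claude Desktop format.

```json
{
  "mcpServers": {
    "social-neuron": {
      "command": "npx",
      "args": ["-y", "@social-neuron/mcp-server"]
    }
  }
}
```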
Brand Brain Tiered Extraction
Problem: Brand profiles captured surface-level information, resulting in AI-generated content that sounded generic rather than on-brand.
What changed: Brand Brain now performs tiered extraction, capturing 40+ data points about your voice, audience, visual identity, and messaging pillars. An alignment scoring system evaluates how well each piece of generated content matches your brand profile.
Impact: AI-generated content sounds significantly more like you. The alignment score gives you a quick way to spot content that drifted from your brand voice before it goes live. The more detail you add to your brand profile, the tighter the match.
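To make the idea of an alignment score concrete, here is a deliberately simplified sketch. Social Neuron's actual scoring model is not public; this just illustrates comparing generated content against stored brand-voice traits.

```python
# Toy alignment score: fraction of brand traits found in the content.
# The real Brand Brain scoring is richer (40+ data points); this is
# only a sketch of the concept.

def alignment_score(content: str, brand_traits: list[str]) -> float:
    """Return the fraction of brand traits present in the content."""
    text = content.lower()
    if not brand_traits:
        return 0.0
    hits = sum(1 for trait in brand_traits if trait.lower() in text)
    return hits / len(brand_traits)

traits = ["sustainable", "playful", "community"]
score = alignment_score("A playful look at sustainable fashion", traits)
print(round(score, 2))  # 0.67
```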
Closed-Loop Learning Engine
Problem: Content suggestions were based on general best practices, not your actual performance data.
What changed: The learning engine now connects your analytics directly to the ideation pipeline. When content performs well (high engagement, strong watch time), those patterns feed into future ideation. Underperforming patterns get deprioritized automatically.
Impact: Your content ideas improve over time based on what actually works for your audience -- not just what works in general. The system identifies patterns like "videos with questions in the first 3 seconds get 40% more watch time" and applies them to future suggestions.
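One common way such a feedback loop can work is an exponential moving average over pattern weights -- well-performing patterns gain weight, weak ones decay. The pattern names and update rule below are illustrative assumptions, not Social Neuron's actual algorithm.

```python
# Hypothetical closed-loop update: blend each pattern's prior weight
# with its newly observed engagement score (exponential moving average).

def update_pattern_weights(weights: dict[str, float],
                           observed: dict[str, float],
                           alpha: float = 0.3) -> dict[str, float]:
    """Return new weights; unseen patterns start at a neutral 0.5."""
    out = dict(weights)
    for pattern, score in observed.items():
        prior = out.get(pattern, 0.5)
        out[pattern] = (1 - alpha) * prior + alpha * score
    return out

weights = {"question_in_first_3s": 0.5, "long_intro": 0.5}
weights = update_pattern_weights(
    weights, {"question_in_first_3s": 0.9, "long_intro": 0.2})
print(weights)  # question_in_first_3s rises, long_intro falls
```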
Quality Gate for Pre-Publish Validation
Problem: Some AI-generated content shipped with issues -- off-brand tone, weak hooks, or formatting problems -- that only became obvious after publishing.
What changed: A quality gate now evaluates content before publishing, checking for brand alignment, hook strength, caption quality, and platform formatting compliance.
Impact: You get a quality signal before hitting publish. The gate flags potential issues and suggests improvements, acting as a second pair of eyes on every piece of content.
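Conceptually, the gate works like a set of threshold checks. The check names below mirror the ones described above, but the scores and the 0.7 threshold are assumptions for illustration only.

```python
# Sketch of a pre-publish quality gate: flag any check that scores
# below a passing threshold. Scores and threshold are illustrative.

def quality_gate(scores: dict[str, float], threshold: float = 0.7) -> list[str]:
    """Return the names of checks that fall below the threshold."""
    return [name for name, score in scores.items() if score < threshold]

flags = quality_gate({
    "brand_alignment": 0.85,
    "hook_strength": 0.55,   # weak hook -- gets flagged
    "caption_quality": 0.90,
    "platform_formatting": 1.0,
})
print(flags)  # ['hook_strength']
```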
Auto-Scheduled Analytics Refresh
Problem: You had to manually check analytics to see how recent posts performed.
What changed: Analytics now refresh automatically at 1 hour, 6 hours, and 24 hours after each post goes live. Performance data feeds directly into the learning engine.
Impact: No more manual refreshing. Your analytics dashboard stays current, and the learning loop starts working from your very first post.
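The refresh schedule itself is simple to express. The 1/6/24-hour offsets come from the changelog entry above; the function wrapping them is just a sketch.

```python
# Documented refresh schedule: analytics snapshots at 1, 6, and 24 hours
# after publish. Only the offsets come from the changelog; the helper
# function is illustrative.
from datetime import datetime, timedelta

REFRESH_OFFSETS_HOURS = (1, 6, 24)

def refresh_times(published_at: datetime) -> list[datetime]:
    return [published_at + timedelta(hours=h) for h in REFRESH_OFFSETS_HOURS]

times = refresh_times(datetime(2026, 3, 1, 9, 0))
print([t.isoformat() for t in times])
# ['2026-03-01T10:00:00', '2026-03-01T15:00:00', '2026-03-02T09:00:00']
```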
Avatar Lab with Multiple Providers
Problem: Avatar video quality and style options were limited to a single provider.
What changed: Avatar Lab now supports three providers -- HeyGen for professional presentations, D-ID for talking photo avatars, and Creatify for product showcase videos. Each offers different visual styles and pricing tiers.
Impact: Choose the avatar style that fits your content. Use HeyGen for polished brand videos, D-ID for quick social clips, or Creatify for product-focused content with AI presenters.
Video Editor
Problem: Generated videos needed external tools for trimming, captioning, and reformatting before they were ready to publish.
What changed: A built-in video editor with trimming, captions, music overlay, timing adjustments, and multi-format export. Edit any generated video directly in the browser.
Impact: No need to export to a separate editing tool. Generate a video, trim the intro, add captions, export in 9:16 for TikTok and 16:9 for YouTube, and send to distribution -- all without leaving Social Neuron.
February 2026
Storyboard Studio
Problem: Creating multi-scene videos required manual scripting, separate image generation for each scene, and external editing to assemble the final product.
What changed: Storyboard Studio walks you through four phases -- script writing, scene planning, AI visual generation, and final assembly with transitions and audio. You end up with a complete video from a single brief.
Impact: Turn a content brief into a finished multi-scene video without touching a video editor. Particularly useful for explainer content, product walkthroughs, and narrative-style social videos.
Autopilot Mode
Problem: Maintaining a consistent posting schedule required daily manual work -- creating content, writing captions, and scheduling posts.
What changed: Autopilot generates and publishes content on a schedule you set. Choose your topics, frequency, and platforms. Brand Brain ensures everything matches your voice. Content goes through a review queue before publishing (or you can enable fully automatic posting).
Impact: Set a posting cadence and let the system handle execution. Particularly valuable during busy periods when you cannot create content daily but still need to maintain presence.
Flow Builder
Problem: Complex content workflows (ideate, create, review, format, schedule) required manually moving content through each step.
What changed: A visual drag-and-drop workflow designer where you chain together ideation, creation, review, and publishing steps. Add conditions and branching logic. Trigger flows on a schedule or manually.
Impact: Build once, run repeatedly. A flow like "every Monday, research trending topics, generate 3 video concepts, create the top-scoring one, and schedule it for Wednesday" runs automatically.
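Under the hood, a flow like this amounts to an ordered list of steps with optional conditions. The step names and runner below are hypothetical -- the real Flow Builder is a visual designer -- but they show the chain-with-branching idea.

```python
# Toy flow runner: execute steps in order against a shared context.
# A step's optional "when" condition gates whether it runs (branching).

def run_flow(steps, context):
    for step in steps:
        when = step.get("when")
        if when is None or when(context):
            context = step["run"](context)
    return context

flow = [
    {"run": lambda ctx: {**ctx, "ideas": ["idea-a", "idea-b", "idea-c"]}},
    {"run": lambda ctx: {**ctx, "best": max(ctx["ideas"])}},
    # Branch: only schedule if an idea was actually selected.
    {"when": lambda ctx: "best" in ctx,
     "run": lambda ctx: {**ctx, "scheduled_for": "Wednesday"}},
]
result = run_flow(flow, {})
print(result["best"], result["scheduled_for"])  # idea-c Wednesday
```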
Multi-Platform Distribution with Per-Platform Formatting
Problem: Publishing the same content to multiple platforms meant manually reformatting captions, aspect ratios, and hashtags for each one.
What changed: Select multiple platforms when distributing and Social Neuron handles formatting automatically. Each platform gets a version optimized for its requirements.
Impact: One-click multi-platform publishing. A single piece of content gets formatted correctly for YouTube, TikTok, Instagram, and LinkedIn simultaneously.
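Per-platform formatting boils down to a rules table applied to each target. The aspect ratios and caption limits below are rough illustrative approximations, not Social Neuron's actual values.

```python
# Hypothetical per-platform formatting rules; limits are approximations.
PLATFORM_RULES = {
    "youtube":   {"aspect": "16:9", "max_caption": 5000},
    "tiktok":    {"aspect": "9:16", "max_caption": 2200},
    "instagram": {"aspect": "9:16", "max_caption": 2200},
    "linkedin":  {"aspect": "16:9", "max_caption": 3000},
}

def format_for(platform: str, caption: str) -> dict:
    """Return a platform-specific version of one piece of content."""
    rules = PLATFORM_RULES[platform]
    return {
        "platform": platform,
        "aspect": rules["aspect"],
        "caption": caption[: rules["max_caption"]],
    }

versions = [format_for(p, "Launch day! #news") for p in PLATFORM_RULES]
print([v["aspect"] for v in versions])  # ['16:9', '9:16', '9:16', '16:9']
```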
January 2026
Ideation Station with Four Modes
Problem: Coming up with content ideas required manual research, trend monitoring, and brainstorming -- the most time-consuming part of content creation.
What changed: Four AI-powered ideation modes -- Research (data-backed ideas), Brainstorm (rapid inspiration), Campaign (cohesive content series), and Trends (what is performing now in your niche). Each generates ideas tailored to your brand voice and audience.
Impact: Start every content session with AI-generated ideas that already match your brand. Ideas cost just 1 credit each, so you can generate dozens and pick the best ones.
Generative Studio with 35+ AI Models
Problem: Existing AI content tools locked you into one model with one style, limiting creative range.
What changed: Access to 35+ AI models for video and image generation. Easy Mode auto-selects the best model for your task. Pro Mode gives full control over model choice, parameters, and settings.
Impact: Match the right AI model to each piece of content -- cinematic video from Veo 3, artistic images from Midjourney, photorealistic shots from Imagen, budget drafts from Grok Imagine. No single-model lock-in.
Content Library with Project Organization
Problem: AI-generated content was scattered across different tools with no central hub for browsing, filtering, or redistributing.
What changed: Every piece of content you create is automatically saved to the Content Library, organized by project, filterable by type, status, platform, and date.
Impact: Never lose a generation. Find any past creation instantly, repurpose high performers for new platforms, and track your full content history in one place.
YouTube Publishing Integration
Problem: Publishing AI-generated videos to YouTube required downloading the file and manually uploading through YouTube Studio.
What changed: Direct publishing to YouTube from Social Neuron. Set titles, descriptions, tags, thumbnails, and schedule -- all from the distribution screen.
Impact: The creation-to-publish pipeline is fully connected. Generate a video, edit it, write the metadata, and schedule it for YouTube without leaving the platform.
Credit System with Transparent Pricing
Problem: AI content generation costs were unpredictable, making it hard to budget for content production.
What changed: A simple credit system where 1 credit = $0.01. Every action shows its credit cost before you commit. Real-time balance in the navigation bar. Detailed usage history in billing.
Impact: You always know exactly what something costs before you generate it. No surprises on your bill. Easy Mode optimizes for your remaining balance.
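The arithmetic is as simple as it sounds: at the stated rate of 1 credit = $0.01, a dollar cost is just credits divided by 100.

```python
# Documented rate: 1 credit = $0.01.
def credits_to_usd(credits: int) -> float:
    return credits / 100

print(credits_to_usd(250))  # 2.5 -- i.e. 250 credits costs $2.50
```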
Most features listed here are available on all plans. Autopilot, Flow Builder, and full API access require a Pro plan ($79/month) or higher. Check Credits and Plans for the full breakdown.
Ready to try this? Sign up free and get 100 credits to start.