Advanced search

Media (1)

Keyword: - Tags -/graphisme

Other articles (111)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
    Once it is enabled, a preconfiguration is applied automatically by MediaSPIP init so that the new feature works out of the box. No configuration step is therefore required for this.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several additional plugins, beyond those of the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a shared-hosting instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (9635)

  • FFmpeg for automatic video generation from images ?

    22 January 2018, by Dionisis K

    I want to implement automatic video generation from images, like Facebook's anniversary videos. And I would also like to add some filters. I've been searching a lot and have read FFmpeg's documentation, which is pretty amazing.

    In the future I want to have 10-20 "bash scripts" with different filters leading to different themes (e.g. fade-in, zoom, overlay images, left-to-right, etc.).

    So my biggest concern is having an interface that offers readable code and provides scalability. Since ffmpeg is a command-line tool, I searched for a wrapper.

    There are wrappers in (...)

    Are there any other wrappers worth checking out?
    Would you recommend any of the above?
    Are there any pros and cons to using a specific programming language for this particular case?
    Is there anything else I should bear in mind before I make a decision?
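
    As a rough illustration of the plain-ffmpeg route (before committing to any wrapper), here is a minimal sketch of one possible "theme" script: it turns a folder of JPEGs into a slideshow, showing each image for three seconds with a one-second fade-in. The path, durations and filter values are placeholder assumptions, not settings taken from the question.

    # theme-fade.sh: hypothetical example; glob input needs a non-Windows ffmpeg build
    ffmpeg -framerate 1/3 -pattern_type glob -i '/path/to/images/*.jpg' \
           -vf "scale=1280:720,fps=25,fade=t=in:st=0:d=1,format=yuv420p" \
           -c:v libx264 slideshow.mp4

    Each additional theme would swap in a different -vf chain (zoompan for zoom effects, overlay for watermarks, and so on); language wrappers such as ffmpeg-python (Python) or fluent-ffmpeg (Node.js) essentially build this argument list for you.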

  • FFMPEG Thumbnail Generation in certain timespan

    15 November 2015, by michbeck

    I'm generating thumbnails at regular intervals based on the length of a video:

    ffmpeg -i "/my/dir/tmp/mymovie.mp4" -vf fps=4/259 /my/dir/tmp/123456/mymoviethumb%d.jpg

    Now I want to use just the first 30 seconds of the video and grab 5 thumbnails from them. I'm stuck; can anybody help me out and give me an example command for how to do that?
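
    A minimal sketch of one way to do this, assuming the goal is 5 evenly spaced frames from the first 30 seconds: limit how much of the input is read with -t 30 and ask the fps filter for one frame every 6 seconds (5/30 fps). The paths are the ones from the question; -vframes 5 is just a safety cap on the number of images written.

    ffmpeg -t 30 -i "/my/dir/tmp/mymovie.mp4" -vf fps=5/30 -vframes 5 /my/dir/tmp/123456/mymoviethumb%d.jpg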

  • Bootstrapping an AI UGC system — video generation is expensive, APIs are limiting, and I need help navigating it all [closed]

    24 June, by Barack _ Ouma

    I'm building a solo AI-powered UGC (User-Generated Content) platform: something that automates the creation of short-form content using AI avatars, voices, visuals, and scripts. But I've hit a wall with video generation and API limitations.

    So far, I've integrated TTS and voice cloning (using ElevenLabs), and I've gotten image generation working. But video generation (especially talking avatars) has been a nightmare, both financially and technically.

    🛠️ Features I'm trying to build:

    AI avatars (face + lip-syncing)
    Script generation (LLM-driven)
    Image generation
    Video composition

    I'm trying to build an AI faceless content creation automation platform as an alternative to Makeugc.com or Reelfarm.org or postbridge.com, just trying to create a working pipeline for automated content.

    ❌ Challenges so far:

    Services like D-ID, Synthesia, Magic Hour, and Luma are either paywalled, have no trials, or are very expensive.

    D-ID does support avatar creation, but you need to pay upfront to even access those features. There's no easy/free entry point.

    Tools like Google Veo 3 are powerful but clearly not accessible for indie builders.
    I've looked into open-source models like WAN 2.1, CogVideo, etc., but I have no clue how to run them or what infrastructure is needed.

    Now I'm torn between buying my own GPU or renting compute power to self-host these models.

    💸 Cost is a huge blocker

    I've been looking through Replicate's pricing, and while some models (especially image generation) are manageable, video models get expensive fast. Even GPU rental rates stack up quickly, especially if you're testing often or experimenting with pipelines. Plus, idle-time billing doesn't help.

    💭 What I could really use help with:

    Has anyone successfully stitched together APIs (voice, avatar, video) into a working UGC pipeline?

    Should I use separate services (e.g. ElevenLabs + Synthesia + WAN) or try to host my own end-to-end system?

    Is it cheaper (long term) to buy a used GPU like a 4090 and run things locally, or better to rent compute short-term? (A rough break-even sketch follows at the end of this post.)

    Any open-source solutions that are beginner-friendly or have minimal setup?
    Any existing frameworks or wrappers for UGC media pipelines that make all this easier?

    I've spent weeks researching, testing APIs, and hitting walls, and while I've learned a lot, I'd really appreciate any guidance from folks who've been here before.
    Thanks in advance 🙏

    And good luck to everyone else trying to build with AI on a budget; this stuff isn't as plug-and-play as it looks in launch videos 💀
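
    On the buy-versus-rent question above, a rough break-even estimate can at least frame the decision. The figures below are illustrative assumptions, not quoted prices, and they ignore electricity, depreciation, and resale value:

    # hypothetical numbers for illustration only; substitute real prices before deciding
    GPU_COST=1600      # assumed cost of a used RTX 4090, in USD
    RENTAL_RATE=0.60   # assumed on-demand rate for a comparable cloud GPU, USD per hour
    echo "scale=0; $GPU_COST / $RENTAL_RATE" | bc   # break-even point in GPU-hours (about 2666)

    If the expected usage over the card's useful life is well below that many GPU-hours, renting (or per-run billing on a service like Replicate) is likely cheaper; well above it, owning starts to pay off.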