
Other articles (50)

  • Requesting the creation of a channel

    12 March 2010, by

    Depending on how the platform is configured, the user may have two different ways of requesting the creation of a channel. The first is at the moment of registration; the second, after registration, by filling in a request form.
    Both methods ask for the same information and work in much the same way: the prospective user must fill in a series of form fields that first of all give the administrators information about (...)

  • Accepted formats

    28 January 2010, by

    The following commands give information about the formats and codecs supported by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    To start with, we (...)
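
    For instance, to check whether a particular encoder is present in the local build, ffmpeg's per-encoder help can be queried (the encoder name here is only an illustrative example):

    ffmpeg -h encoder=libtheora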

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (HTML, CSS), LaTeX, Google Earth and (...)

On other sites (6972)

  • Adding an image resource over a video file from the SD card using ffmpeg or MediaMuxer for Android

    18 September 2014, by Alin

    I am stuck in an area that I am not at all comfortable working in.

    Here is what I have done so far:

    • Made an Ubuntu VirtualBox machine
    • Downloaded the latest ffmpeg version, which is 2.3.3
    • Compiled ffmpeg to be compatible with armv7-a, so in the end I get two folders: include and lib. In include I have the headers and in lib the *.so files (just as in http://www.roman10.net/how-to-build-ffmpeg-with-ndk-r9/)
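
    A typical cross-compilation configure call for such a build might look roughly like the following sketch; the toolchain and sysroot paths are assumptions and depend on the NDK version used:

    ./configure \
        --target-os=linux \
        --arch=arm \
        --cpu=armv7-a \
        --enable-cross-compile \
        --cross-prefix=$TOOLCHAIN/bin/arm-linux-androideabi- \
        --sysroot=$SYSROOT \
        --enable-shared \
        --disable-static \
        --disable-programs \
        --disable-doc
    make && make install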


    I have created a new Android project and made a jni folder, and this is how far I have got... Even this, with all the struggle of being new to Linux and to compiling, took me almost a week.

    Adding a watermark in ffmpeg is done, I believe, with libavfilter? I still have to dig into this; in any case, the original ffmpeg command I need to translate into my project is:

    ffmpeg -i input.avi -i logo.png -filter_complex 'overlay=10:main_h-overlay_h-10' output.avi

    From what I have studied so far, inside jni I need to:

    • create an add-watermark.c file in which I somehow call the function that performs the overlay filtering
    • create an Android.mk to build it against the needed ffmpeg libraries

      LOCAL_PATH := $(call my-dir)

      include $(CLEAR_VARS)

      # The JNI wrapper module
      LOCAL_MODULE := add-watermark
      LOCAL_SRC_FILES := add-watermark.c
      LOCAL_LDLIBS := -llog -ljnigraphics -lz
      # Prebuilt ffmpeg shared libraries, imported below
      LOCAL_SHARED_LIBRARIES := libavformat libavcodec libswscale libavutil

      include $(BUILD_SHARED_LIBRARY)

      $(call import-module,ffmpeg-2.3.3/android/armv7-a)

    • create Application.mk

      APP_ABI := armeabi-v7a
      APP_PLATFORM := android-8

    • run ndk-build and use the generated libraries in my Android project.

    I really need help to continue, so every answer will be received with great attention and pleasure.

    Later edit:
    Would it be possible to somehow build ffmpeg.exe as a library and call its main() with exactly the same parameters as the original executable? I do not want to run ffmpeg as a standalone executable, but to have it integrated within the project.
    Something like http://www.roman10.net/how-to-port-ffmpeg-the-program-to-androidideas-and-thoughts/ What downsides would this approach have?
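
    A minimal sketch of that idea, assuming ffmpeg's own main() has been renamed at build time (for example by compiling ffmpeg.c with -Dmain=ffmpeg_main) and linked into the shared library together with the ffmpeg libs; the Java class and package in the JNI function name are hypothetical:

    #include <jni.h>
    #include <stdlib.h>
    #include <string.h>

    /* ffmpeg's own main(), renamed at build time (assumption, see above). */
    int ffmpeg_main(int argc, char **argv);

    JNIEXPORT jint JNICALL
    Java_com_example_watermark_FFmpegBridge_run(JNIEnv *env, jclass clazz,
                                                jobjectArray args)
    {
        int argc = (*env)->GetArrayLength(env, args);
        char *argv[64];
        int i, ret;

        if (argc > 63)
            return -1;

        /* Copy the Java String[] into a NULL-terminated C argv. */
        for (i = 0; i < argc; i++) {
            jstring s = (jstring)(*env)->GetObjectArrayElement(env, args, i);
            const char *utf = (*env)->GetStringUTFChars(env, s, NULL);
            argv[i] = strdup(utf);
            (*env)->ReleaseStringUTFChars(env, s, utf);
        }
        argv[argc] = NULL;

        ret = ffmpeg_main(argc, argv); /* runs one ffmpeg "command line" */

        for (i = 0; i < argc; i++)
            free(argv[i]);
        return ret;
    }

    The usual caveat with this approach is that ffmpeg's main() calls exit() on errors and tears down global state when it finishes, so a failed run can kill the whole app process, and invoking it more than once per process is unreliable.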

    Later edit 2: if this can be done with MediaMuxer or the other APIs added in Android 4.3, I am open to it, provided sample code is given. I did look over the MediaCodec and MediaMuxer samples, and also Grafika, and haven't found a proper way to do what I want. I would still prefer the ffmpeg approach if it works.

  • FFMPEG GIF with transparency from png image sequence

    11 February 2020, by Nick S

    I’ve been trying to use ffmpeg to create a GIF with a transparent background, but whenever the movement goes on top of the background, the pixels stay there. It’s a tree with a wind animation; this is how it ends up: https://i.imgur.com/pq4ArBG.png

    I first create the palette, and then the GIF:

    ffmpeg -i Tree_%04d.png -vf palettegen=reserve_transparent=1 palette.png

    ffmpeg -framerate 30 -i Tree_%04d.png -i palette.png -lavfi paletteuse=alpha_threshold=128 treegif.gif

    It seems the previous frames simply stay there, but I can’t figure out how to dispose of them.
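
    For what it's worth, the two passes above can also be combined into a single command using the split filter; this is only a restatement of the same commands (it does not by itself change how frames are disposed):

    ffmpeg -framerate 30 -i Tree_%04d.png -lavfi "split[a][b];[a]palettegen=reserve_transparent=1[p];[b][p]paletteuse=alpha_threshold=128" treegif.gif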

  • Adjust Opus Ogg page size using Go

    15 March 2024, by matthew

    Using the code from Go's Pion WebRTC library play-from-disk example, I'm able to create a WebRTC connection and send audio from a local Ogg file to peers.

    The play-from-disk example README.md details how to first convert the Ogg file to a page duration of 20,000 microseconds (20 ms) using ffmpeg, like so:

    ffmpeg -i $INPUT_FILE -c:a libopus -page_duration 20000 -vn output.ogg

    I'd like to make this same adjustment to Ogg data natively in Go, without using ffmpeg. How can this be done?

    I've tried using Pion's oggreader and oggwriter, but using them requires deep knowledge of the Opus file format and the RTP protocol that neither I nor ChatGPT seem to have.
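
    For what it's worth, below is a minimal sketch of one way to do this, assuming every Opus packet in the source is a single 20 ms frame (libopus' default): parse the Ogg framing (RFC 3533) by hand to recover individual packets, then let Pion's oggwriter put each packet on its own page (it advances the granule position by 960 samples, i.e. 20 ms at 48 kHz, per packet). The file names and the 48 kHz / 2-channel parameters are assumptions:

    package main

    import (
        "fmt"
        "io"
        "os"

        "github.com/pion/rtp"
        "github.com/pion/webrtc/v3/pkg/media/oggwriter"
    )

    // readPackets walks Ogg pages (RFC 3533) and calls fn once per complete
    // packet. A packet ends at a segment whose lacing value is < 255;
    // packets spanning pages are stitched together via partial.
    func readPackets(r io.Reader, fn func(pkt []byte) error) error {
        var partial []byte
        hdr := make([]byte, 27) // fixed part of the page header
        for {
            if _, err := io.ReadFull(r, hdr); err != nil {
                if err == io.EOF {
                    return nil
                }
                return err
            }
            if string(hdr[:4]) != "OggS" {
                return fmt.Errorf("bad capture pattern")
            }
            lacing := make([]byte, int(hdr[26])) // segment table
            if _, err := io.ReadFull(r, lacing); err != nil {
                return err
            }
            for _, l := range lacing {
                seg := make([]byte, int(l))
                if _, err := io.ReadFull(r, seg); err != nil {
                    return err
                }
                partial = append(partial, seg...)
                if l < 255 { // packet boundary
                    if err := fn(partial); err != nil {
                        return err
                    }
                    partial = nil
                }
            }
        }
    }

    func main() {
        in, err := os.Open("input.ogg") // e.g. the saved TTS response body
        if err != nil {
            panic(err)
        }
        defer in.Close()

        // oggwriter writes its own OpusHead/OpusTags header pages and puts
        // each packet we hand it on a page of its own.
        out, err := oggwriter.New("output.ogg", 48000, 2)
        if err != nil {
            panic(err)
        }
        defer out.Close()

        n := 0
        if err := readPackets(in, func(pkt []byte) error {
            n++
            if n <= 2 || len(pkt) == 0 {
                return nil // skip the source's OpusHead/OpusTags
            }
            return out.WriteRTP(&rtp.Packet{Payload: pkt})
        }); err != nil {
            panic(err)
        }
    }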

    For additional context, I'm using a Text-to-Speech (TTS) API to generate my Ogg data as follows:

    req, err := http.NewRequest("POST", "https://api.openai.com/v1/audio/speech",
        bytes.NewBuffer([]byte(fmt.Sprintf(`{
          "model": "tts-1",
          "voice": "alloy",
          "input": %q,
          "response_format": "opus"
        }`, text))))
    if err != nil {
        log.Fatal(err)
    }

    req.Header.Add("Authorization", "Bearer "+token)
    req.Header.Add("Content-Type", "application/json; charset=UTF-8")

    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        log.Fatal(err)
    }

    As I'm trying to create a real-time audio app, I'd ideally like to pipe the response to WebRTC, performing the conversion on chunks as they are received, so that peers can start listening to the audio before it has been fully received on the server.
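
    Building on the sketch above, the same readPackets helper could consume resp.Body directly, forwarding packets as they arrive instead of waiting for the full download. Here audioTrack is a hypothetical *webrtc.TrackLocalStaticSample negotiated as in play-from-disk, the github.com/pion/webrtc/v3/pkg/media and time imports are assumed, and 20 ms frames are again an assumption:

    go func() {
        defer resp.Body.Close()
        n := 0
        _ = readPackets(resp.Body, func(pkt []byte) error {
            n++
            if n <= 2 || len(pkt) == 0 {
                return nil // skip OpusHead/OpusTags from the TTS stream
            }
            return audioTrack.WriteSample(media.Sample{
                Data:     pkt,
                Duration: 20 * time.Millisecond, // assumed frame duration
            })
        })
    }()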