Advanced search

Media (0)

Keyword: - Tags - / optimisation

No media matching your criteria is available on this site.

Other articles (63)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Participate in its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to sign up to the translators' mailing list to ask for more information.
    Currently MediaSPIP is only available in French and (...)

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (10975)

  • Moviepy Making a CompositeVideoClip of two concatenate_videoclips

    16 April 2018, by Max Better

    I'm working with the Moviepy library, and I've been beating my head against a wall with this last step for a while.

    GifClips = concatenate_videoclips(TheGIFs, method='compose')
    TextClips = concatenate_videoclips(TheTexts, method='compose')

    I’ve written both of these to separate files and they look fine. But I’m having a problem getting them to combine properly.

    I'm trying:

    FinishedClips = CompositeVideoClip([GifClips, TextClips], size=(1920,1080))

    It has the audio from TextClips and shows the GifClips but the text isn’t visible. It did show when written alone without the composite.

    It does work if I combine GifClips with a single TextClip but this doesn’t work when I need text clips to run one after another.

    I could run a CompositeVideoClip for every single TextClip with the corresponding part of the GifClips and then concatenate them all together, but that doesn't seem like the neatest way of doing this. My guess is there's a fairly obvious argument here somewhere, but looking through the docs and examples I'm struggling to find it so far.

    Any suggestions on how I could get the TextClips clip to show up properly in a composite would be much appreciated.
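
    One possible workaround, staying close to the per-clip compositing idea mentioned above, is to skip concatenating the text clips and instead lay each TextClip over the gif track at its own start time. This is only a rough sketch, assuming TheGIFs and TheTexts are the clip lists from the question, that each text clip already has a duration set, and that finished.mp4 is a hypothetical output name:

    from moviepy.editor import CompositeVideoClip, concatenate_videoclips

    # build the gif track exactly as in the question
    GifClips = concatenate_videoclips(TheGIFs, method='compose')

    # overlay each text clip at the point where the previous one ends,
    # instead of concatenating the text clips first
    layers = [GifClips]
    t = 0
    for txt in TheTexts:
        layers.append(txt.set_start(t).set_position('center'))
        t += txt.duration

    FinishedClips = CompositeVideoClip(layers, size=(1920, 1080))
    FinishedClips.write_videofile('finished.mp4', fps=24)

    Because each text clip is its own layer, the text stays on top of GifClips for exactly its own duration, which is the behaviour the single-TextClip composite already showed.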

  • FFmpeg: Run one .bat after another

    16 April 2018, by AniEncoder

    I'm trying to run a .bat file after a file has been downloaded by qBittorrent; the .bat changes the subtitle font, renames the file and moves it to another folder.

    There are 3 problems:
    1. Two .bat files need to run: font and rename.
    2. The file gets copied into folders.
    3. I need to add a gap between "Eng" and "Sub", but if I do that it does not show in the output file.

    Change Font
    for %%A in (*.mkv) do ffmpeg -i "%%A" -map 0:s:0 "%%A.ass"

       ren *.ass *.txt

       setlocal

       @echo off
       call :FindReplace "Open Sans Semibold" "CronosPro-Bold" "*.txt"
       call :FindReplace "&H00020713" "&H0000003B" "*.txt"
       call :FindReplace ",1.7," ",2," "*.txt"
       @echo on

       ren *.txt *.ass

       for %%A in (*.mkv) do ffmpeg -i "%%A" -i "%%A".ass -c copy -map 0:0 -map 0:1 -map 1:0 -metadata:s:s:0 language=eng -disposition:s:0 default -attach CronosPro-Bold.ttf -metadata:s:3 mimetype=application/x-truetype-font "..\720p\%%A"

       del *.ass

       :FindReplace <findstr> <replstr> <file>
       set tmp="%temp%\tmp.txt"
       If not exist %temp%\_.vbs call :MakeReplace
       for /f "tokens=*" %%a in ('dir "%3" /s /b /a-d /on') do (
         for /f "usebackq" %%b in (`Findstr /mic:"%~1" "%%a"`) do (
           echo(&Echo Replacing "%~1" with "%~2" in file %%~nxa
           <%%a cscript //nologo %temp%\_.vbs "%~1" "%~2">%tmp%
           if exist %tmp% move /Y %tmp% "%%~dpnxa">nul
         )
       )

       del %temp%\_.vbs

       :MakeReplace
       >%temp%\_.vbs echo with Wscript
       >>%temp%\_.vbs echo set args=.arguments
       >>%temp%\_.vbs echo .StdOut.Write _
       >>%temp%\_.vbs echo Replace(.StdIn.ReadAll,args(0),args(1),1,-1,1)
       >>%temp%\_.vbs echo end with

    Rename

    ren  "[HorribleSubs]*.mkv" "[hasu]*.mkv"
    ren  *[*  *[1090p][Eng Sub].*
    call "1080.bat"
    move [hasu]*.* ..\Trash
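
    If the goal is simply to trigger both passes from qBittorrent's single "run external program" slot, one option is a small wrapper that calls the font pass and then the rename pass in order, so the second script only starts once the first has finished. A minimal sketch, assuming the two scripts above are saved as font.bat and rename.bat (hypothetical names) in a hypothetical working folder:

    import subprocess
    import sys

    # hypothetical folder where qBittorrent drops the finished .mkv files
    WORKDIR = r"C:\Downloads\Anime"

    def main() -> int:
        # run the two existing batch files one after the other and stop
        # the chain if the first one fails
        for script in ("font.bat", "rename.bat"):
            result = subprocess.run(["cmd", "/c", script], cwd=WORKDIR)
            if result.returncode != 0:
                print(f"{script} failed with exit code {result.returncode}")
                return result.returncode
        return 0

    if __name__ == "__main__":
        sys.exit(main())

    qBittorrent would then point at this one wrapper instead of the two .bat files, which also avoids guessing how long the first script needs before the second can start.
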
  • Android sws_scale RGB frame taking a long time and causing latency in the video

    17 April 2018, by AJit

    I am reading frames with FFmpeg and trying to draw them directly to a surface window in native Android. If I scale the image to the exact size we get from the camera and convert YUV420P to RGBA, it takes 1 ms to scale through av_image_fill_arrays, but if I scale the image to the size I need for the surface, the same frame takes 25 to 30 ms to scale, so latency is the problem.

    In the example below the pixel format is YUV420P.

    Low latency: [1 ms by sws_scale]

    // converter from the decoder's format to RGB at the SAME width/height
    swsContext = sws_getContext(videoCodecContext->width,
                                videoCodecContext->height,
                                videoCodecContext->pix_fmt,
                                videoCodecContext->width,
                                videoCodecContext->height,
                                AV_PIX_FMT_RGB0,
                                SWS_FAST_BILINEAR, NULL, NULL, NULL);
    av_image_fill_arrays()    // attach the RGB output buffer to pictureFrame
    av_read_frame()           // read the next packet
    avcodec_decode_video2()   // decode it into videoFrame
    sws_scale(swsContext,
              (const uint8_t *const *) videoFrame->data,
              videoFrame->linesize,
              0,
              videoCodecContext->height,
              pictureFrame->data,
              pictureFrame->linesize);
    ANativeWindow_lock()
    // write all buffer bytes to the window
    ANativeWindow_unlockAndPost()

    High latency: [30 ms by sws_scale]

    [videoContext Width: 848 Height: 608]
    swsContext = sws_getContext(videoCodecContext->width,
                                videoCodecContext->height,
                                videoCodecContext->pix_fmt,
                                1080,
                                608,
                                AV_PIX_FMT_RGB0,
                                SWS_FAST_BILINEAR, NULL, NULL, NULL);

    Whenever we set the destination width and height to anything other than the videoCodecContext dimensions, sws_scale takes more than 30 ms and we delay the video.

    TRY 1: Pass the buffer from JNI to Java and create a bitmap there to scale later on, but createBitmap itself takes 500 ms, so that is no use.

    TRY 2: Direct YUV420P to RGB conversion; it still takes a long time compared to sws_scale.

    TRY 3: Write the YUV directly to the window bytes, but it shows without color (if anyone has a solution here it might be helpful).

    TRY 4: Use yuvlib; no luck.

    TRY 5: Different color combinations and flags in swsContext.

    Any help would be appreciated.