
Media (91)

Other articles (53)

  • Take part in its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to ask for more information.
    Currently MediaSPIP is only available in French and (...)

  • Use it, talk about it, criticize it

    10 April 2011

    The first thing to do is to talk about it, either directly with the people involved in its development, or with those around you to convince new people to use it.
    The larger the community, the faster it will evolve...
    A mailing list is available for any exchange between users.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (9892)

  • Revision 1306c1b09b: Masked Compound Inter Prediction

    9 August 2013, by Yue Chen

    Changed Paths:
     Modify /configure
     Modify /vp9/common/vp9_blockd.h
     Modify /vp9/common/vp9_entropymode.c
     Modify /vp9/common/vp9_entropymode.h
     Modify /vp9/common/vp9_onyxc_int.h
     Modify /vp9/common/vp9_reconinter.c
     Modify /vp9/common/vp9_reconinter.h
     Modify /vp9/common/vp9_rtcd_defs.sh
     Modify /vp9/common/vp9_sadmxn.h
     Modify /vp9/common/vp9_subpelvar.h
     Modify /vp9/decoder/vp9_decodemv.c
     Modify /vp9/decoder/vp9_treereader.h
     Modify /vp9/encoder/vp9_bitstream.c
     Modify /vp9/encoder/vp9_encodeframe.c
     Modify /vp9/encoder/vp9_mcomp.c
     Modify /vp9/encoder/vp9_mcomp.h
     Modify /vp9/encoder/vp9_onyx_if.c
     Modify /vp9/encoder/vp9_onyx_int.h
     Modify /vp9/encoder/vp9_ratectrl.c
     Modify /vp9/encoder/vp9_rdopt.c
     Modify /vp9/encoder/vp9_sad_c.c
     Modify /vp9/encoder/vp9_subexp.c
     Modify /vp9/encoder/vp9_subexp.h
     Modify /vp9/encoder/vp9_variance.h
     Modify /vp9/encoder/vp9_variance_c.c

    Masked Compound Inter Prediction

    The masked compound motion compensation has mask types separating a
    block into wedges at specific angles and offsets. The mask is used to
    weight pixels from the first and second predictors to obtain the final
    predictor. The weighting is smooth near the partition boundaries but
    becomes a selection farther away.

    Bit-rate reduction: +0.960% (derfraw300), +0.651% (stdhdraw250)

    Change-Id: I1327d22d3fc585b72ffa0e03abd90f3980f0876a
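
    To make the weighting concrete, here is a rough Python sketch of the idea. It is an illustration only, not the actual libvpx implementation: the ramp width, the 0..64 weight range, and the function names are assumptions.

import numpy as np

def wedge_mask(h, w, angle_deg, offset=0.5, ramp=4, max_w=64):
    # Signed distance of every pixel from an oblique partition line through
    # (offset*w, offset*h); weights ramp smoothly from 0 to max_w within
    # +/- ramp pixels of the line and saturate (hard selection) farther away.
    ys, xs = np.mgrid[0:h, 0:w]
    nx, ny = np.cos(np.radians(angle_deg)), np.sin(np.radians(angle_deg))
    dist = (xs - offset * w) * nx + (ys - offset * h) * ny
    return np.clip(max_w / 2 + dist * max_w / (2 * ramp), 0, max_w).astype(np.int32)

def masked_compound(pred0, pred1, mask, max_w=64):
    # Per-pixel weighted blend of the two inter predictors (integer inputs),
    # with rounding, so the result stays in the same range as the predictors.
    return (mask * pred0 + (max_w - mask) * pred1 + max_w // 2) // max_w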

  • Does anybody know how to concat on ffmpeg and python? [closed]

    22 December 2023, by Jonas Harker

    I wrote a Python script that is supposed to concatenate two videos by running an ffmpeg command through subprocess.
I get this error message:

    2c0] Could not open encoder before EOF
[vost#0:0/libx264 @ 0000024013bfa480] Task finished with error code: -22 (Invalid argument)
[aost#0:1/aac @ 0000024013bfb2c0] Task finished with error code: -22 (Invalid argument)
[aost#0:1/aac @ 0000024013bfb2c0] Terminating thread with return code -22 (Invalid argument)
[vost#0:0/libx264 @ 0000024013bfa480] Terminating thread with return code -22 (Invalid argument)
[out#0/mp4 @ 0000024013b91d80] Nothing was written into output file, because at least one of its streams received no packets.
frame=    0 fps=0.0 q=0.0 Lsize=       0kB time=N/A bitrate=N/A speed=N/A    
Conversion failed!
The file Batman - Mask of The Phantasm, Open Matte Version (1993 - 1080p BluRay)_segment_11_segment_2.mp4 has been processed.
The temporary file output_temp.mp4 has replaced Batman - Mask of The Phantasm, Open Matte Version (1993 - 1080p BluRay)_segment_11_segment_2.mp4.
Running: ffmpeg -i "D:\BTO\MemeLaughnClap\Shorted\Batman - Mask of The Phantasm, Open Matte Version (1993 - 1080p BluRay)_segment_11_segment_3.mp4" -i "D:\BTO\MemeLaughnClap\Media\FV.mp4" -filter_complex concat=n=2:v=1:a=1 -c:v libx264 -preset medium -crf 23 -c:a aac -b:a 192k -y "D:\BTO\MemeLaughnClap\Shorted\output_temp.mp4"
ffmpeg version 2023-12-18-git-be8a4f80b9-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 12.2.0 (Rev10, Built by MSYS2 project)


    


    I hoped they would concatenate, but they did not.
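
    For reference, a frequent cause of this kind of failure is that the two inputs do not have matching stream parameters (resolution, audio sample rate, and so on), which the concat filter expects. Below is a rough Python sketch of one way to normalise both inputs before concatenating; the target size, frame rate, and sample rate are assumptions, not values taken from the question.

import subprocess

def concat_two(in1, in2, out, width=1920, height=1080, fps=30):
    # Scale/pad both inputs to a common size and frame rate, resample audio,
    # then feed the normalised streams to the concat filter.
    norm = ("scale={w}:{h}:force_original_aspect_ratio=decrease,"
            "pad={w}:{h}:(ow-iw)/2:(oh-ih)/2,fps={fps},setsar=1").format(w=width, h=height, fps=fps)
    fc = ("[0:v]" + norm + "[v0];[1:v]" + norm + "[v1];"
          "[0:a]aresample=48000[a0];[1:a]aresample=48000[a1];"
          "[v0][a0][v1][a1]concat=n=2:v=1:a=1[v][a]")
    cmd = ["ffmpeg", "-y", "-i", in1, "-i", in2,
           "-filter_complex", fc, "-map", "[v]", "-map", "[a]",
           "-c:v", "libx264", "-preset", "medium", "-crf", "23",
           "-c:a", "aac", "-b:a", "192k", out]
    # check=True raises CalledProcessError, so a failed conversion is not silently ignored.
    subprocess.run(cmd, check=True)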

    


  • Remove random background from video using ffmpeg or Python

    20 April 2024, by Raheel Shahzad

    I want to remove the background from a person's video using ffmpeg or Python: record a video anywhere, detect the person in it, and remove everything except that person. I am not asking about a green or single-colour background, since that can be handled with chroma keying, and that is not what I'm looking for.

    



    I've tried this approach (https://tryolabs.com/blog/2018/04/17/announcing-luminoth-0-1/), but it gives me a rectangular bounding box as output. That is informative, since it narrows down the area to explore, but I still need to remove the whole background.
I've also tried grabcut (https://docs.opencv.org/4.1.0/d8/d83/tutorial_py_grabcut.html), but it needs user interaction; otherwise the result isn't very good.
I've also tried ffmpeg and found this example (http://oioiiooixiii.blogspot.com/2016/09/ffmpeg-extract-foreground-moving.html), but it needs a still image, so I tried taking a background picture before recording the video with the person; however, a lot is required to take the difference between the background image and each video frame.

    



    For the OpenCV approach, I've tried this:

    



import cv2 as cv
import numpy as np
from matplotlib import pyplot as plt

img = cv.imread('pic.png')
mask = np.zeros(img.shape[:2], np.uint8)   # mask refined in place by grabCut
bgdModel = np.zeros((1, 65), np.float64)   # internal background model
fgdModel = np.zeros((1, 65), np.float64)   # internal foreground model
rect = (39, 355, 1977, 2638)               # (x, y, w, h) box around the person
cv.grabCut(img, mask, rect, bgdModel, fgdModel, 5, cv.GC_INIT_WITH_RECT)
mask2 = np.where((mask == 2) | (mask == 0), 0, 1).astype('uint8')  # keep sure/probable foreground
img = img * mask2[:, :, np.newaxis]
plt.imshow(img), plt.colorbar(), plt.show()


    



    But it also removes some parts of the person.
I also tried the ffmpeg way, but the result was not good either.

    



    ffmpeg -report -y -i "img.jpg" -i "vid.mov" -filter_complex "[1:v]format=yuva444p,lut=c3=128[video2withAlpha],[0:v][video2withAlpha]blend=all_mode=difference[out]" -map "[out]" "output.mp4"


    



    All I need is a person's image/video taken against any normal background, processed without user interaction such as area selection. Luminoth has trained models, but it gives a bounding box around the person rather than the exact person, so I can't use it directly to remove the background. Any help or guidance on removing the background will be appreciated.
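
    For what it's worth, one non-interactive direction to explore is OpenCV's MOG2 background subtractor, which learns the background from the video itself instead of requiring a separate still image. This is a rough sketch under the assumption that the camera is static and the person moves; the file name and parameter values are guesses, not a full solution.

import cv2 as cv

cap = cv.VideoCapture('vid.mov')
# MOG2 learns a per-pixel background model over time, so no separate still
# background image is required; moving regions (the person) become foreground.
subtractor = cv.createBackgroundSubtractorMOG2(history=500, varThreshold=25, detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)
    # Clean up the raw mask a little before using it to cut out the person.
    kernel = cv.getStructuringElement(cv.MORPH_ELLIPSE, (5, 5))
    fg_mask = cv.morphologyEx(fg_mask, cv.MORPH_OPEN, kernel)
    person_only = cv.bitwise_and(frame, frame, mask=fg_mask)
    cv.imshow('foreground', person_only)
    if cv.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv.destroyAllWindows()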