
Media (91)

Other articles (48)

  • MediaSPIP Player: the controls

    26 May 2010, by

    The player's mouse controls
    In addition to the actions triggered by clicking the visible buttons of the player interface, other actions can also be performed with the mouse: Click: clicking on the video or on the audio logo will start or pause playback depending on its current state; Wheel (scrolling): when the mouse is placed over the area used by the media (hover), the mouse wheel no longer has its usual page-scrolling effect, but instead decreases or (...)

  • Enhancing its visual appearance

    10 April 2011

    MediaSPIP is based on a system of themes and templates (squelettes). The templates define where information is placed on the page, defining a specific use of the platform, and the themes define the overall graphic design.
    Anyone can propose a new graphic theme or template and make it available to the community.

  • APPENDIX: Plugins used specifically for the farm

    5 March 2010, by

    For proper operation, the central/master site of the farm needs several additional plugins compared to the channel sites: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a shared-hosting instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (5254)

  • IP camera livestream on server

    1 July 2015, by kunsti

    I have written an ASP.NET MVC application.
    The website shows several IP camera streams.
    They are embedded with an image tag:

    <img height='169' width='300' style='border:1px solid' src="http://xx.xx.xx.xxx/nphMotionJpeg?Resolution=320x180&amp;Quality=Standard" alt="Kamerastream" />

    Because of some changes to the access list, it is no longer possible for some users to see these streams.

    The Windows server (a virtual machine) can connect to these camera streams.
    The users who are not able to see the streams can still reach the server.
    So my idea was to serve the stream from the server instead of from the front end.
    I tried to do it via IIS Live Smooth Streaming.

    But I wasn't able to find an introduction that I could understand.
    The one I did understand uses Expression Encoder Pro, which is no longer available for download.

    I have also found FFmpeg, but I do not know how to work with it.
    Is there anyone out there who could explain it to me, point me to a good tutorial, or suggest a better way to do what I want?
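
    One possible approach, sketched here as an assumption rather than a tested setup, is to let the server pull the MJPEG stream with FFmpeg and repackage it as HLS segments that IIS serves as ordinary static files (the output path, segment length and playlist size below are placeholders):

    ffmpeg -i "http://xx.xx.xx.xxx/nphMotionJpeg?Resolution=320x180&Quality=Standard" -c:v libx264 -preset veryfast -tune zerolatency -g 50 -f hls -hls_time 2 -hls_list_size 5 -hls_flags delete_segments C:\inetpub\wwwroot\streams\camera1.m3u8

    FFmpeg keeps rewriting camera1.m3u8 and the numbered .ts segments next to it, so users only need access to the web server, not to the camera; on the page, the image tag would be replaced by an HLS-capable player pointing at the playlist.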

  • FFMPEG - Adding a full-frame image to the beginning of a video

    28 March 2014, by user3470655

    I have tried many different ways to create a 10-second video file from an image file, using all the same switches and codecs I used to encode my video file. However, when I concat the two using anything but complex_filter (which forces the video through another round of transcoding), the resulting video file is corrupt. I believe this is due to the inherent differences in the 10-second clip that ffmpeg created from the image, but there must be some way to get it to encode exactly the same way as my video file.

    Here is the command I am using to turn the image into a 10s video clip (I added a silent mp3 because I thought that an audio stream starting partway through the video was messing things up):

    ffmpeg -loop 1 -i splash.jpg -i silence.mp3 -c:v libx264 -preset slow -g 60 -r 29.97 -crf 16 -c:a libfdk_aac -b:a 256k -cutoff 18000 -t 5 tmpoutput1.mp4

    Here is the command I am using to encode my video :

    ffmpeg -i input.f4v -c:v libx264 -preset slow -g 60 -r 29.97 -crf 16 -c:a libfdk_aac -b:a 256k -cutoff 18000 tmpoutput2.mp4

    Here is the command I use to convert both of them to .ts to get ready for concat:

    ffmpeg -i tmpoutput1.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts tmpoutput1.ts

    And finally the concat (which is where I get crazy video corruption; everything along the way looks fine):

    ffmpeg -i "concat:tmpoutput1.ts|tmpoutput2.ts" -c copy output.mp4

    Again, the issue is that I'm already transcoding everything once and I should be able to get it to transcode in a similar enough structure so that it can be concatenated without another transcode tacked onto the end.

    Has anyone successfully added a full-frame splash graphic to the front of a video with ffmpeg before? I am using a brand new cross-compile of ffmpeg as I thought that might be the issue, but alas, the issue persists after the update.

    Thanks!
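
    Corruption after a stream-copy concat usually means the two .ts files are not bit-stream compatible (different resolution, SAR, pixel format, H.264 profile/level, frame rate or audio parameters). A minimal sketch, assuming the main video is 1280x720 yuv420p with 44.1 kHz audio (adjust these to whatever ffprobe actually reports for tmpoutput2.ts), would be to inspect both files and force the splash encode to match:

    ffprobe -v error -show_streams tmpoutput1.ts

    ffprobe -v error -show_streams tmpoutput2.ts

    ffmpeg -loop 1 -i splash.jpg -i silence.mp3 -vf "scale=1280:720,setsar=1" -pix_fmt yuv420p -c:v libx264 -preset slow -profile:v high -level 4.0 -g 60 -r 29.97 -crf 16 -c:a libfdk_aac -b:a 256k -ar 44100 -cutoff 18000 -t 5 tmpoutput1.mp4

    Once every field ffprobe reports (codec, width/height, SAR, pix_fmt, profile, level, frame rate, audio codec/sample rate/channels) matches between the two .ts files, the copy concat should produce a clean output.mp4.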

  • FFmpeg Libraries: How Do I Convert a Video to 4-Level Grayscale Video at 160x100 Pixels and as Raw Data

    15 April 2014, by pimathbrainiac

    This seems like an odd question, so I am going to start from the beginning:

    I am a calculator programmer, specifically a Ti-89 programmer, and I'm trying to make a video player for it. I have the code for the calculator side done already, but I need to be able to convert video to a specific format that works as follows:

    Each frame is 4-level grayscale, where there are 2 bytes for every 8 horizontal pixels: a "back," or dark, byte and a "front," or light, byte. Basically, every frame is 2 monochrome binary images, with the first byte of the first one followed by the first byte of the second one, so the data is stored as follows:

    FB(front buffer)[1],BB(back buffer)[1],FB[2],BB[2], etc...

    Here's how the display works : (defining 0 as white and 3 as black, with 1 and 2 being in-between shades)
    Each front buffer bit is worth 1, so if the front buffer were displayed with nothing in the back buffer, the "1" (or true) bits would show up as light gray pixels. These are then added to the back buffer bits, which are worth 2 (so if the back buffer were displayed with nothing in the front buffer, the "1" (or true) bits would show up as dark gray pixels) to get :

    "0" bit on either buffer = white pixel (0+0=0)

    "1" bit on front buffer but not back buffer = light gray pixel (1+0=1)

    "1" bit on back buffer but not front buffer = dark gray pixel (0+2=2)

    "1" bit on both buffers = black pixel (1+2=3)

    I have this down calculator-side, but I need to know: 1) how to convert the video to a specific framerate using the libs, 2) how to convert these frames to 4-level grayscale at 160x100 pixels, and 3) how to save these frames as raw data in the format I described. Thank you in advance for your answer(s).
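
    For points 1 and 2, the ffmpeg command-line tool can already produce fixed-framerate, 160x100, 8-bit grayscale raw frames (the input name and the 15 fps value are only placeholders):

    ffmpeg -i input.mp4 -vf "fps=15,scale=160:100,format=gray" -f rawvideo frames.gray

    For point 3, here is a minimal packing sketch in C. It assumes each 8-bit gray value is quantized to levels 0-3 (0 = white, 3 = black, as defined above), that the leftmost pixel of each group of 8 maps to the most significant bit, and that each front-buffer byte is immediately followed by the matching back-buffer byte; the file names are made up:

    #include <stdio.h>

    #define W 160

    int main(void)
    {
        FILE *in  = fopen("frames.gray", "rb");   /* raw 8-bit grayscale frames from ffmpeg */
        FILE *out = fopen("video.bin", "wb");     /* packed FB/BB output (hypothetical name) */
        unsigned char row[W];

        if (!in || !out)
            return 1;

        /* Read one 160-pixel scanline at a time; frame boundaries simply fall every 100 lines. */
        while (fread(row, 1, W, in) == W) {
            for (int x = 0; x < W; x += 8) {
                unsigned char fb = 0, bb = 0;
                for (int bit = 0; bit < 8; bit++) {
                    int level = 3 - (row[x + bit] >> 6);   /* gray 255..0 -> level 0..3, 0 = white */
                    if (level & 1) fb |= 0x80 >> bit;      /* front buffer bit, worth 1 */
                    if (level & 2) bb |= 0x80 >> bit;      /* back buffer bit, worth 2 */
                }
                fputc(fb, out);                            /* FB byte, then the matching BB byte */
                fputc(bb, out);
            }
        }
        fclose(in);
        fclose(out);
        return 0;
    }

    The same thing can be done with the libav* libraries directly (decode, convert to GRAY8 at 160x100 with libswscale, then run the same packing loop), but the two-step version above is easier to check frame by frame.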