
Media (1)
- Revolution of Open-source and film making towards open film making
6 October 2011
Updated: July 2013
Language: English
Type: Text
Other articles (25)
- Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact your MediaSPIP administrator to find out.
- MediaSPIP v0.2
21 June 2013. MediaSPIP 0.2 is the first stable MediaSPIP release.
Its official release date is June 21, 2013, and it is announced here.
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
- Organizing by category
17 May 2013. In MediaSPIP, a section has two names: category and rubrique.
The various documents stored in MediaSPIP can be filed under different categories. A category can be created by clicking "publish a category" in the publish menu at the top right (after logging in). A category can itself be placed inside another category, so you can build a tree of categories.
The next time a document is published, the newly created category will be offered (...)
On other sites (8858)
- FFMPEG - Moving text to appear every 'X' Seconds
23 September 2015, by Kevin. This is an ffmpeg command for moving text (left to right):
ffmpeg -i input.mp4 -vf drawtext="fontfile=/path/to/fonts/FreeSans.ttf:text='Hello World':fontcolor=white@1.0:fontsize=16:y=h-line_h-100:x=(2*n)-tw" -codec:v libx264 -codec:a copy output.mp4
How can I make the moving text start after 'X' seconds and appear every 'X' seconds?
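One possible approach (a sketch, not from the original post) is drawtext's enable option, which takes an expression evaluated against the timestamp t; the values 5 and 15 below are placeholders for 'X':
ffmpeg -i input.mp4 -vf drawtext="fontfile=/path/to/fonts/FreeSans.ttf:text='Hello World':fontcolor=white@1.0:fontsize=16:y=h-line_h-100:x=(2*n)-tw:enable='gte(t,5)*lt(mod(t,15),5)'" -codec:v libx264 -codec:a copy output.mp4
Here gte(t,5) keeps the text hidden for the first 5 seconds and lt(mod(t,15),5) shows it for 5 seconds out of every 15. Note that x=(2*n)-tw keeps advancing with the frame number n, so each reappearance continues from wherever the scroll left off; restarting the scroll on every cycle would need an x expression based on mod(t, ...) instead.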
- ffmpeg: videos before and after conversion aren't the same length
16 July 2012, by Koyot. I have a set of .mov videos which require conversion to .mp4 format. I'm using ffmpeg and running this command:
ffmpeg -i Banking.mov -vsync -async -sameq -ac 1 -ab 64k -ar 44100 Banking.mp4
There is a slight difference in length between the input and output video (00:03:35.407 versus 00:03:35.582). And here's the catch: I'm storing a set of time cues at precise times in a file which is used by a program to point at specific scenes. The roughly 0.2 second difference causes it to point at the wrong scenes, making the cue set useless. Is there any way to preserve exactly the same duration in a different format?
FFmpeg version CVS, Copyright (c) 2000-2004 Fabrice Bellard
Mac OSX universal build for ffmpegX
configuration: --enable-memalign-hack --enable-mp3lame --enable-gpl --disable-vhook --disable-ffplay --disable-ffserver --enable-a52 --enable-xvid --enable-faac --enable-faad --enable-amr_nb --enable-amr_wb --enable-pthreads --enable-x264
libavutil version: 49.0.0
libavcodec version: 51.9.0
libavformat version: 50.4.0
built on Apr 15 2006 04:58:19, gcc: 4.0.1 (Apple Computer, Inc. build 5250)
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x5597b8]negative ctts, ignoring
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Banking.mov':
Duration: 00:03:35.6, start: 0.000000, bitrate: 1400 kb/s
Stream #0.0(eng): Audio: pcm_s16be, 24000 Hz, stereo, 768 kb/s
Stream #0.1(eng), 29.97 fps(r): Video: h264, yuv420p, 720x480
Output #0, mp4, to 'Banking.mp4':
Stream #0.0, 29.97 fps(c): Video: mpeg4, yuv420p, 720x480, q=2-31, 200 kb/s
Stream #0.1: Audio: aac, 44100 Hz, mono, 64 kb/s
Stream mapping:
Stream #0.1 -> #0.0
Stream #0.0 -> #0.1
Press [q] to stop encoding
frame= 6461 q=0.0 Lsize= 53181kB time=215.3 bitrate=2023.3kbits/s
video:51437kB audio:1618kB global headers:0kB muxing overhead 0.237816%
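As an aside not in the original post: a drift of this size often comes from the audio path, since resampling 24 kHz PCM to 44.1 kHz and encoding to AAC pads the stream out to whole codec frames, which can add a fraction of a second. A recent ffprobe (much newer than the 2006 build shown above) can report both container durations precisely for comparison:
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 Banking.mov
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 Banking.mp4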
- Video Conferencing in HTML5: WebRTC via Web Sockets
14 June 2012, by silvia. A bit over a week ago I gave a presentation at Web Directions Code 2012 in Melbourne. Maxine and John asked me to speak about something related to HTML5 video, so I went for the new shiny: WebRTC – real-time communication in the browser.
I only had 20 min, so I had to make it tight. I wanted to show off video conferencing without special plugins in Google Chrome in just a few lines of code, as is the promise of WebRTC. To a large extent, I achieved this. But I made some interesting discoveries along the way. Demos are in the slide deck.
UPDATE: Opera 12 has been released with WebRTC support.
Housekeeping: if you want to replicate what I have done, you need to install Google Chrome 19+. Then make sure you go to chrome://flags and activate the MediaStream and PeerConnection experiment(s). Restart your browser and now you can experiment with this feature. Big warning up-front: it's not production-ready, since there are still changes happening to the spec and there is no compatible implementation by another browser yet.
Here is a brief summary of the steps involved in setting up video conferencing in your browser:
- Set up a video element each for the local and the remote video stream.
- Grab the local camera and stream it to the first video element.
- (*) Establish a connection to another person running the same Web page.
- Send the local camera stream on that peer connection.
- Accept the remote camera stream into the second video element.
Now, the most difficult part of all of this – believe it or not – is the signalling part that is required to build the peer connection (marked with (*)). Initially I wanted to run completely without a server and just enter the remote's IP address to establish the connection. This is, however, not a functionality that the PeerConnection object provides [might this be something to add to the spec?].
So, you need a server known to both parties that can provide for the handshake to set up the connection. All the examples that I have seen, such as https://apprtc.appspot.com/, use a channel management server on Google’s appengine. I wanted it all working with HTML5 technology, so I decided to use a Web Socket server instead.
I implemented my Web Socket server using node.js (code of websocket server). The video conferencing demo is in the slide deck in an iframe – you can also use the stand-alone html page. Works like a treat.
While it is still using Google's STUN server to get through NAT, the messaging for setting up the connection is running completely through the Web Socket server. The messages that get exchanged are plain SDP message packets with a session ID. There are OFFER, ANSWER, and OK packets exchanged for each streaming direction. You can see some of it in the image below.
I’m not running a public WebSocket server, so you won’t be able to see this part of the presentation working. But the local loopback video should work.
At the conference, it all went without a hitch (as long as the wireless played along). I believe you have to host the WebSocket server on the same machine as the Web page, otherwise it won't work for security reasons.
A whole new world of opportunities lies out there when we get the ability to set up video conferencing on every Web page – scary and exciting at the same time!