
Media (1)
-
Collections - Quick creation form
19 February 2013
Updated: February 2013
Language: French
Type: Image
Other articles (5)
-
Videos
21 April 2011
As with "audio" documents, MediaSPIP displays videos whenever possible using the HTML5 video tag.
One drawback of this tag is that it is not handled correctly by some browsers (Internet Explorer, to name one), and each browser natively supports only certain video formats.
Its main advantage is that video playback is handled natively by the browser, removing the need for Flash and (...)
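As a rough illustration of what this implies in practice (not from the article), each source file is typically encoded once per target format so that every browser finds a rendition it can decode natively; the file names, codecs and bitrate below are assumptions:
# Hypothetical encoding pass: one H.264/MP4 and one WebM rendition of the same source,
# so the HTML5 video tag can offer a format that each browser supports natively
ffmpeg -i source.mov -c:v libx264 -c:a aac -movflags +faststart video.mp4
ffmpeg -i source.mov -c:v libvpx -b:v 1M -c:a libvorbis video.webm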
-
Regular Cron tasks for the farm
1 December 2010
Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of all the instances of the shared farm on a regular basis. Coupled with a system Cron on the central site of the farm, this makes it possible to generate regular visits to the various sites and to prevent the tasks of rarely visited sites from being too (...)
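In practice (a hedged sketch, not from the article), the "system Cron on the central site" usually amounts to a crontab entry that requests a page of the central site at a fixed interval, so the super cron task always has a visit to run from; the URL is a placeholder:
# Hypothetical crontab entry on the central server: hit the central site once a minute
# so the gestion_mutu_super_cron task is triggered regularly (URL is a placeholder)
* * * * * curl -s -o /dev/null http://central-site.example.org/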
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (3080)
-
ffmpeg transcode to live stream
14 September 2016, by brayancastrop
I need to display an IP camera stream in an HTML video tag. I have figured out how to transcode the RTSP stream to a file like this:
ffmpeg -i "rtsp://user:password@ip" -s 640x480 /tmp/output.mp4
Now I need to be able to live-stream the RTSP input in a video tag like this:
<video src="http://domain:port/output.mp4" autoplay="autoplay"></video>
I was trying to do something like this on my server (an Ubuntu micro instance on Amazon) in order to play the video in the video tag, but it didn't work:
ffmpeg -i "rtsp://user:password@ip" -s 640x480 http://localhost:8080/stream.mp4
Instead I got this log:
[tcp @ 0x747b40] Connection to tcp://localhost:8080 failed: Connection refused
http://localhost:8080/stream.mp4: Connection refused
I don't really understand what is happening; I'm not sure whether ffmpeg is sending the output to that URL or serving the output there. I've been checking the ffmpeg man docs but didn't find any example related to this use case, and I also looked at other questions such as FFmpeg Stream Transcoding, which is similar to my last attempt, without success.
By the way, this is the camera I'm using: DS-2CD2020F-I(W) - http://www.hikvision.com/en/Products_accessries_157_i5847.html
They offer an HTTP preview, but it is just an img tag whose source updates, and it appears to be unstable. This is my first time trying to do something like this, so any insight about how to achieve it will be really useful and appreciated.
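A hedged sketch of one common workaround (not part of the original question): pushing to http://localhost:8080 only works if something is already listening there, which is what the "Connection refused" error points at; a simpler route is to let ffmpeg segment the RTSP feed into an HLS playlist under the web server's document root and point the player at that playlist. The paths, camera URL and encoding settings are placeholders:
# Hypothetical command: pull the RTSP feed and write a rolling HLS playlist into the web root
ffmpeg -rtsp_transport tcp -i "rtsp://user:password@ip" -c:v libx264 -preset veryfast -s 640x480 -an -f hls -hls_time 2 -hls_list_size 5 -hls_flags delete_segments /var/www/html/stream.m3u8
The resulting stream.m3u8 can then be played natively on Safari/iOS, or through a JavaScript HLS player in other browsers.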
-
How To Install FFMPEG on Elastic Beanstalk
13 April 2017, by Nick Lynch
This is not a duplicate; I have found one thread, but it is outdated and does not work: Install ffmpeg on elastic beanstalk using ebextensions config. I have been trying to install this for some time and nothing seems to work.
Please share the config.yml that will make this work. I am using 64bit Amazon Linux 2016.03 v2.1.6 running PHP 7.0 on Elastic Beanstalk.
My current file is:
branch-defaults:
  default:
    environment: Default-Environment
  master:
    environment: Default-Environment
global:
  application_name: "My First Elastic Beanstalk Application"
  default_ec2_keyname: ~
  default_platform: "64bit Amazon Linux 2016.03 v2.1.6 running PHP 7.0"
  default_region: us-east-1
  profile: eb-cli
  sc: git
packages: ~
  yum:
    ImageMagick: []
    ImageMagick-devel: []
commands:
  01-wget:
    command: "wget -O /tmp/ffmpeg.tar.gz http://ffmpeg.gusari.org/static/64bit/ffmpeg.static.64bit.2014-03-05.tar.gz"
  02-mkdir:
    command: "if [ ! -d /opt/ffmpeg ] ; then mkdir -p /opt/ffmpeg; fi"
  03-tar:
    command: "tar -xzf ffmpeg.tar.gz -C /opt/ffmpeg"
    cwd: /tmp
  04-ln:
    command: "if [[ ! -f /usr/bin/ffmpeg ]] ; then ln -s /opt/ffmpeg/ffmpeg /usr/bin/ffmpeg; fi"
  05-ln:
    command: "if [[ ! -f /usr/bin/ffprobe ]] ; then ln -s /opt/ffmpeg/ffprobe /usr/bin/ffprobe; fi"
  06-pecl:
    command: "if [ `pecl list | grep imagick` ] ; then pecl install -f imagick; fi"
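As a hedged aside (not part of the question), the same steps can be run by hand on the instance, for example over an eb ssh session, to see which command actually fails before baking them into an .ebextensions file; the URL and paths simply mirror the commands above:
# Hypothetical manual run of the same install steps (e.g. via eb ssh), useful for spotting the failing step
wget -O /tmp/ffmpeg.tar.gz http://ffmpeg.gusari.org/static/64bit/ffmpeg.static.64bit.2014-03-05.tar.gz
sudo mkdir -p /opt/ffmpeg
sudo tar -xzf /tmp/ffmpeg.tar.gz -C /opt/ffmpeg
sudo ln -s /opt/ffmpeg/ffmpeg /usr/bin/ffmpeg
sudo ln -s /opt/ffmpeg/ffprobe /usr/bin/ffprobe
/usr/bin/ffmpeg -version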
-
Real-time watermarking with MPEG-DASH
14 July 2016, by Calvin W.
In the system, I want to add a unique watermark (e.g. the IP address of the client and a timestamp) to the video that he/she wants to watch.
But when I handled it with OpenCV, it took 25 minutes for a 15-minute video, and I still need to transcode the result to MP4 with ffmpeg.
Now I'm trying the watermark function of ffmpeg, but it still needs some time.
Is it possible to send the video to the client side with MPEG-DASH while transcoding it with ffmpeg?
System spec (Amazon EC2 c3.xlarge):
Intel Xeon E5-2680 v2 (Ivy Bridge) - 4 vCPU
7.5G RAM
40GB SSD
Ubuntu 14.04 LTS
OpenCV 2.4.13
ffmpeg 3.1.1
Code:
import cv2
import sys
import time
from datetime import datetime as dt

# frame rate of the input video
fps = float(sys.argv[4])
# encode to AVC
fourcc = cv2.cv.CV_FOURCC('A', 'V', 'C', '1')
# transparency of the text overlay
alpha = 0.1
beta = 1 - alpha
# input video
cap = cv2.VideoCapture(sys.argv[3])
# current frame index, starting from 0
frameIndex = 0
# frame indices of the 10 s and 20 s marks (these were missing in the original post)
time10 = int(fps * 10)
time20 = int(fps * 20)
# get the input video's width/height
width = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT))
# configure the output (error when using .mp4)
out = cv2.VideoWriter('output.avi', fourcc, fps, (width, height))
# access time
timeStr = dt.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')
requestIP = sys.argv[1]
username = sys.argv[2]
text = "%s %s %s" % (requestIP, username, timeStr)
# start reading the video
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        # no more frames (or a read error): stop
        break
    # add the text between 10 s and 20 s
    if time10 < frameIndex < time20:
        # clone the frame to draw the text on
        overlay = frame.copy()
        cv2.putText(overlay, text, (100, 100), cv2.FONT_HERSHEY_PLAIN, 0.5, (255, 255, 255))
        # blend both frames to make the text transparent
        cv2.addWeighted(overlay, alpha, frame, beta, 0, frame)
    # write the frame to the output
    out.write(frame)
    frameIndex += 1
    # stop at the last frame
    if cap.get(cv2.cv.CV_CAP_PROP_POS_FRAMES) == cap.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT):
        break
# end of video: release everything
cap.release()
out.release()
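A hedged sketch of the ffmpeg-only route the question asks about (not from the original post): the drawtext filter can burn the per-client text into the picture while the dash muxer writes MPEG-DASH segments, so a player can start fetching segments while the transcode is still running. The font path, output location, text and the 10-20 second window are placeholders mirroring the OpenCV script above:
# Hypothetical single-pass command: overlay the client-specific text between 10 s and 20 s
# and package the output as MPEG-DASH (font, text and paths are placeholders)
ffmpeg -i input.mp4 -vf "drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:text='192.0.2.1 username 2016-07-14':x=100:y=100:fontcolor=white@0.5:enable='between(t,10,20)'" -c:v libx264 -preset veryfast -c:a aac -f dash /var/www/html/dash/output.mpd
A DASH-capable player (for example dash.js) can then point at output.mpd and begin playback before the encode has finished.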