
Other articles (59)
-
List of compatible distributions
26 April 2011
The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

Distribution name   Version name           Version number
Debian              Squeeze                6.x.x
Debian              Wheezy                 7.x.x
Debian              Jessie                 8.x.x
Ubuntu              The Precise Pangolin   12.04 LTS
Ubuntu              The Trusty Tahr        14.04
If you want to help us improve this list, you can provide us with access to a machine whose distribution is not mentioned above, or send the necessary fixes to add (...) -
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out. -
Submit enhancements and plugins
13 April 2011
If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know, and its integration into the core MediaSPIP functionality will be considered.
You can use the development discussion list to request help with creating a plugin. Since MediaSPIP is based on SPIP, you can also use the SPIP discussion list, SPIP-Zone.
On other sites (8013)
-
Concatenating image with video for Freeze frame effect
12 May 2015, by Code_Ed_Student
I am currently trying to achieve a freeze frame effect with ffmpeg. This is something easy to do with Adobe After Effects, as shown here. However, I would like a freeze frame effect (of 5 seconds duration) followed by the 15-second video for the final output, which should amount to a final duration of 20 seconds. With the settings below, I get a still image with the video following, but it does not show a "freeze frame effect". How can I achieve a "freeze frame effect" in ffmpeg?
# create image
ffmpeg -i "/media/test/test.mp4" -ss 00:00:00.023222 -vframes 1 "/media/test/test.png"
# create freeze frame effect
ffmpeg -i "/media/test/test.mp4" -loop 1 -i "/media/test/test.jpg" -an \
-filter_complex "[1:v]trim=start=0:end=5[ol];[0:v]setpts=[nv];[nv][ol]overlay=eof_action=pass[final]" \
-map '[final]' -c:a aac -strict experimental -c:v libx264 -q 1 "/media/test/test_effect.mp4"
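Two details stand out in the attempt above: the first command writes test.png while the second reads test.jpg, and setpts= is given no expression, which is not a valid filter invocation. A simpler route to the freeze-then-play result is to build a 5-second clip from the still and concatenate it with the original video. A minimal sketch, assuming a 1280x720 source at 25 fps (adjust the scale and fps values to match the actual file):

ffmpeg -loop 1 -t 5 -i "/media/test/test.png" -i "/media/test/test.mp4" \
-filter_complex "[0:v]scale=1280:720,setsar=1,fps=25[still];[1:v]scale=1280:720,setsar=1,fps=25[main];[still][main]concat=n=2:v=1:a=0[final]" \
-map "[final]" -c:v libx264 "/media/test/test_effect.mp4"

The concat filter requires both segments to share the same frame size, sample aspect ratio and frame rate, hence the scale, setsar and fps normalization on each branch; audio is omitted, matching the -an in the original command.
-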
ADD Image overlay to ffmpeg video stream
1 July 2017, by Chris
I am new to ffmpeg and want to add a HUD to the video stream, so a few questions:
- What file do I need to edit?
- What do I need to do to achieve this?
Thanks in advance. Also, I am VERY new to all of this, so I will need step-by-step instructions.
I saw other questions saying to add this:
ffmpeg -n -i video.mp4 -i logo.png -filter_complex "[0:v]setsar=sar=1[v];[v][1]blend=all_mode='overlay':all_opacity=0.7" -movflags +faststart tmb/video.mp4
But I don't know where to put it. I entered it in the terminal and got this:
pi@raspberrypi:~ $ ffmpeg -n -i video.mp4 -i logo.png -filter_complex "[0:v]setsar=sar=1[v];[v][1]blend=all_mode='overlay':all_opacity=0.7" -movflags +faststart tmb/video.mp4
ffmpeg version N-86215-gb5228e4 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 4.9.2 (Raspbian 4.9.2-10)
configuration: --arch=armel --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree --extra-libs=-ldl
libavutil 55. 63.100 / 55. 63.100
libavcodec 57. 96.101 / 57. 96.101
libavformat 57. 72.101 / 57. 72.101
libavdevice 57. 7.100 / 57. 7.100
libavfilter 6. 90.100 / 6. 90.100
libswscale 4. 7.101 / 4. 7.101
libswresample 2. 8.100 / 2. 8.100
libpostproc 54. 6.100 / 54. 6.100
video.mp4: No such file or directory
I don't understand what I am supposed to do with the video.mp4?
HERE IS THE SCRIPT THAT SENDS THE VIDEO.
import subprocess
import shlex
import re
import os
import time
import urllib2
import platform
import json
import sys
import base64
import random
import argparse
parser = argparse.ArgumentParser(description='robot control')
parser.add_argument('camera_id')
parser.add_argument('video_device_number', default=0, type=int)
parser.add_argument('--kbps', default=450, type=int)
parser.add_argument('--brightness', default=75, type=int, help='camera brightness')
parser.add_argument('--contrast', default=75, type=int, help='camera contrast')
parser.add_argument('--saturation', default=15, type=int, help='camera saturation')
parser.add_argument('--rotate180', action='store_true', help='rotate image 180 degrees')  # type=bool would treat any non-empty string, even "False", as True
parser.add_argument('--env', default="prod")
args = parser.parse_args()
server = "runmyrobot.com"
#server = "52.52.213.92"
from socketIO_client import SocketIO, LoggingNamespace
# enable raspicam driver in case a raspicam is being used
os.system("sudo modprobe bcm2835-v4l2")
if args.env == "dev":
    print "using dev port 8122"
    port = 8122
elif args.env == "prod":
    print "using prod port 8022"
    port = 8022
else:
    print "invalid environment"
    sys.exit(0)
print "initializing socket io"
print "server:", server
print "port:", port
socketIO = SocketIO(server, port, LoggingNamespace)
print "finished initializing socket io"
#ffmpeg -f qtkit -i 0 -f mpeg1video -b 400k -r 30 -s 320x240 http://52.8.81.124:8082/hello/320/240/
def onHandleCameraCommand(*args):
    #thread.start_new_thread(handle_command, args)
    print args
socketIO.on('command_to_camera', onHandleCameraCommand)
def onHandleTakeSnapshotCommand(*args):
    print "taking snapshot"
    inputDeviceID = streamProcessDict['device_answer']
    snapShot(platform.system(), inputDeviceID)
    with open("snapshot.jpg", 'rb') as f:
        data = f.read()
    print "emit"
    socketIO.emit('snapshot', {'image': base64.b64encode(data)})
socketIO.on('take_snapshot_command', onHandleTakeSnapshotCommand)
def randomSleep():
    """A short wait is good for quick recovery, but sometimes a longer delay is needed: retrying at short intervals can keep failing, for example when the system still thinks the port is in use and every retry makes it look busy again. So this usually picks a short interval, but occasionally a long one."""
    timeToWait = random.choice((0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 5))
    print "sleeping", timeToWait
    time.sleep(timeToWait)
def getVideoPort():
    url = 'http://%s/get_video_port/%s' % (server, cameraIDAnswer)
    for retryNumber in range(2000):
        try:
            print "GET", url
            response = urllib2.urlopen(url).read()
            break
        except:
            print "could not open url ", url
            time.sleep(2)
    return json.loads(response)['mpeg_stream_port']
def getAudioPort():
    url = 'http://%s/get_audio_port/%s' % (server, cameraIDAnswer)
    for retryNumber in range(2000):
        try:
            print "GET", url
            response = urllib2.urlopen(url).read()
            break
        except:
            print "could not open url ", url
            time.sleep(2)
    return json.loads(response)['audio_stream_port']
def runFfmpeg(commandLine):
    print commandLine
    ffmpegProcess = subprocess.Popen(shlex.split(commandLine))
    print "command started"
    return ffmpegProcess
def handleDarwin(deviceNumber, videoPort, audioPort):
    p = subprocess.Popen(["ffmpeg", "-list_devices", "true", "-f", "qtkit", "-i", "dummy"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    print err
    deviceAnswer = raw_input("Enter the number of the camera device for your robot from the list above: ")
    commandLine = 'ffmpeg -f qtkit -i %s -f mpeg1video -b 400k -r 30 -s 320x240 http://%s:%s/hello/320/240/' % (deviceAnswer, server, videoPort)
    process = runFfmpeg(commandLine)
    return {'process': process, 'device_answer': deviceAnswer}
def handleLinux(deviceNumber, videoPort, audioPort):
    print "sleeping to give the camera time to start working"
    randomSleep()
    print "finished sleeping"
    #p = subprocess.Popen(["ffmpeg", "-list_devices", "true", "-f", "qtkit", "-i", "dummy"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    #out, err = p.communicate()
    #print err
    os.system("v4l2-ctl -c brightness={brightness} -c contrast={contrast} -c saturation={saturation}".format(
        brightness=args.brightness, contrast=args.contrast, saturation=args.saturation))
    if deviceNumber is None:
        deviceAnswer = raw_input("Enter the number of the camera device for your robot: ")
    else:
        deviceAnswer = str(deviceNumber)
    #commandLine = '/usr/local/bin/ffmpeg -s 320x240 -f video4linux2 -i /dev/video%s -f mpeg1video -b 1k -r 20 http://runmyrobot.com:%s/hello/320/240/' % (deviceAnswer, videoPort)
    #commandLine = '/usr/local/bin/ffmpeg -s 640x480 -f video4linux2 -i /dev/video%s -f mpeg1video -b 150k -r 20 http://%s:%s/hello/640/480/' % (deviceAnswer, server, videoPort)
    # For new JSMpeg
    #commandLine = '/usr/local/bin/ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video%s -f mpegts -codec:v mpeg1video -s 640x480 -b:v 250k -bf 0 http://%s:%s/hello/640/480/' % (deviceAnswer, server, videoPort) # ClawDaddy
    #commandLine = '/usr/local/bin/ffmpeg -s 1280x720 -f video4linux2 -i /dev/video%s -f mpeg1video -b 1k -r 20 http://runmyrobot.com:%s/hello/1280/720/' % (deviceAnswer, videoPort)
    if args.rotate180:
        rotationOption = "-vf transpose=2,transpose=2"
    else:
        rotationOption = ""
    # video with audio
    videoCommandLine = '/usr/local/bin/ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video%s %s -f mpegts -codec:v mpeg1video -s 640x480 -b:v %dk -bf 0 -muxdelay 0.001 http://%s:%s/hello/640/480/' % (deviceAnswer, rotationOption, args.kbps, server, videoPort)
    audioCommandLine = '/usr/local/bin/ffmpeg -f alsa -ar 44100 -ac 1 -i hw:1 -f mpegts -codec:a mp2 -b:a 32k -muxdelay 0.001 http://%s:%s/hello/640/480/' % (server, audioPort)
    print videoCommandLine
    print audioCommandLine
    videoProcess = runFfmpeg(videoCommandLine)
    audioProcess = runFfmpeg(audioCommandLine)
    # 'audio_process' matches the key main() expects; a 'process' alias is also
    # included because several code paths in main() assume a single-process dict
    return {'video_process': videoProcess, 'audio_process': audioProcess, 'process': videoProcess, 'device_answer': deviceAnswer}
def handleWindows(deviceNumber, videoPort):
    p = subprocess.Popen(["ffmpeg", "-list_devices", "true", "-f", "dshow", "-i", "dummy"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    lines = err.split('\n')
    count = 0
    devices = []
    for line in lines:
        #if "] \"" in line:
        #    print "line:", line
        m = re.search('.*\\"(.*)\\"', line)
        if m is not None:
            #print line
            if m.group(1)[0:1] != '@':
                print count, m.group(1)
                devices.append(m.group(1))
                count += 1
    if deviceNumber is None:
        deviceAnswer = raw_input("Enter the number of the camera device for your robot from the list above: ")
    else:
        deviceAnswer = str(deviceNumber)
    device = devices[int(deviceAnswer)]
    commandLine = 'ffmpeg -s 640x480 -f dshow -i video="%s" -f mpegts -codec:v mpeg1video -b 200k -r 20 http://%s:%s/hello/640/480/' % (device, server, videoPort)
    process = runFfmpeg(commandLine)
    return {'process': process, 'device_answer': device}
def handleWindowsScreenCapture(deviceNumber, videoPort):
    p = subprocess.Popen(["ffmpeg", "-list_devices", "true", "-f", "dshow", "-i", "dummy"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    lines = err.split('\n')
    count = 0
    devices = []
    for line in lines:
        #if "] \"" in line:
        #    print "line:", line
        m = re.search('.*\\"(.*)\\"', line)
        if m is not None:
            #print line
            if m.group(1)[0:1] != '@':
                print count, m.group(1)
                devices.append(m.group(1))
                count += 1
    if deviceNumber is None:
        deviceAnswer = raw_input("Enter the number of the camera device for your robot from the list above: ")
    else:
        deviceAnswer = str(deviceNumber)
    device = devices[int(deviceAnswer)]
    commandLine = 'ffmpeg -f dshow -i video="screen-capture-recorder" -vf "scale=640:480" -f mpeg1video -b 50k -r 20 http://%s:%s/hello/640/480/' % (server, videoPort)
    print "command line:", commandLine
    process = runFfmpeg(commandLine)
    return {'process': process, 'device_answer': device}
def snapShot(operatingSystem, inputDeviceID, filename="snapshot.jpg"):
    try:
        os.remove('snapshot.jpg')
    except:
        print "did not remove file"
    commandLineDict = {
        'Darwin': 'ffmpeg -y -f qtkit -i %s -vframes 1 %s' % (inputDeviceID, filename),
        'Linux': '/usr/local/bin/ffmpeg -y -f video4linux2 -i /dev/video%s -vframes 1 -q:v 1000 -vf scale=320:240 %s' % (inputDeviceID, filename),
        'Windows': 'ffmpeg -y -s 320x240 -f dshow -i video="%s" -vframes 1 %s' % (inputDeviceID, filename)}
    print commandLineDict[operatingSystem]
    os.system(commandLineDict[operatingSystem])
def startVideoCapture():
    videoPort = getVideoPort()
    audioPort = getAudioPort()
    print "video port:", videoPort
    print "audio port:", audioPort
    #if len(sys.argv) >= 3:
    #    deviceNumber = sys.argv[2]
    #else:
    #    deviceNumber = None
    deviceNumber = args.video_device_number
    result = None
    if platform.system() == 'Darwin':
        result = handleDarwin(deviceNumber, videoPort, audioPort)
    elif platform.system() == 'Linux':
        result = handleLinux(deviceNumber, videoPort, audioPort)
    elif platform.system() == 'Windows':
        #result = handleWindowsScreenCapture(deviceNumber, videoPort)
        result = handleWindows(deviceNumber, videoPort)  # handleWindows takes no audio port
    else:
        print "unknown platform", platform.system()
    return result
def timeInMilliseconds():
    return int(round(time.time() * 1000))
def main():
    print "main"
    global streamProcessDict  # the take_snapshot callback above reads this
    streamProcessDict = None
    twitterSnapCount = 0
    while True:
        socketIO.emit('send_video_status', {'send_video_process_exists': True,
                                            'camera_id': cameraIDAnswer})
        if streamProcessDict is not None:
            print "stopping previously running ffmpeg (needs to happen if this is not the first iteration)"
            streamProcessDict['process'].kill()
        print "starting process just to get device result" # this should be a separate function so you don't have to do this
        streamProcessDict = startVideoCapture()
        inputDeviceID = streamProcessDict['device_answer']
        print "stopping video capture"
        streamProcessDict['process'].kill()
        #print "sleeping"
        #time.sleep(3)
        #frameCount = int(round(time.time() * 1000))
        videoWithSnapshots = False
        while videoWithSnapshots:
            frameCount = timeInMilliseconds()
            print "taking single frame image"
            snapShot(platform.system(), inputDeviceID, filename="single_frame_image.jpg")
            with open("single_frame_image.jpg", 'rb') as f:
                # every so many frames, post a snapshot to twitter
                #if frameCount % 450 == 0:
                if frameCount % 6000 == 0:
                    data = f.read()
                    print "emit"
                    socketIO.emit('snapshot', {'frame_count': frameCount, 'image': base64.b64encode(data)})
                data = f.read()
                print "emit"
                socketIO.emit('single_frame_image', {'frame_count': frameCount, 'image': base64.b64encode(data)})
            time.sleep(0)
            #frameCount += 1
        if False:
            if platform.system() != 'Windows':
                print "taking snapshot"
                snapShot(platform.system(), inputDeviceID)
                with open("snapshot.jpg", 'rb') as f:
                    data = f.read()
                print "emit"
                # skip sending the first image because it's mostly black, maybe completely black
                #todo: should find out why this black image happens
                if twitterSnapCount > 0:
                    socketIO.emit('snapshot', {'image': base64.b64encode(data)})
        print "starting video capture"
        streamProcessDict = startVideoCapture()
        # This loop counts out the delay between twitter snapshots.
        # Every 20 seconds it sends a status signal to the server.
        # Every 40 seconds it kills ffmpeg in case it has hung; the check below restarts it.
        # Every 80 seconds it reports whether the ffmpeg child process still exists.
        period = 2 * 60 * 60 # period in seconds between snaps
        for count in range(period):
            time.sleep(1)
            if count % 20 == 0:
                socketIO.emit('send_video_status', {'send_video_process_exists': True,
                                                    'camera_id': cameraIDAnswer})
            if count % 40 == 30:
                print "stopping video capture just in case it has reached a state where it's looping forever, not sending video, and not dying as a process, which can happen"
                streamProcessDict['video_process'].kill()
                streamProcessDict['audio_process'].kill()
                time.sleep(1)
            if count % 80 == 75:
                print "send status about this process and its child process ffmpeg"
                ffmpegProcessExists = streamProcessDict['process'].poll() is None
                socketIO.emit('send_video_status', {'send_video_process_exists': True,
                                                    'ffmpeg_process_exists': ffmpegProcessExists,
                                                    'camera_id': cameraIDAnswer})
            #if count % 190 == 180:
            #    print "reboot system in case the webcam is not working"
            #    os.system("sudo reboot")
            # if the video stream process dies, restart it
            if streamProcessDict['video_process'].poll() is not None or streamProcessDict['audio_process'].poll() is not None:
                # wait before trying to start ffmpeg
                print "ffmpeg process is dead, waiting before trying to restart"
                randomSleep()
                streamProcessDict = startVideoCapture()
        twitterSnapCount += 1
if __name__ == "__main__":
    #if len(sys.argv) > 1:
    #    cameraIDAnswer = sys.argv[1]
    #else:
    #    cameraIDAnswer = raw_input("Enter the Camera ID for your robot, you can get it by pointing a browser to the runmyrobot server %s: " % server)
    cameraIDAnswer = args.camera_id
    main()

ERROR:
ffmpeg -n -f mpegts -i http://54.183.232.63:12221 -i logo.png -filter_complex "[0:v]setsar=sar=1[v];[v][1]blend=all_mode='overlay':all_opacity=0.7" -movflags +faststart tmb/video.mp4
ffmpeg version N-86215-gb5228e4 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 4.9.2 (Raspbian 4.9.2-10)
configuration: --arch=armel --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree --extra-libs=-ldl
libavutil 55. 63.100 / 55. 63.100
libavcodec 57. 96.101 / 57. 96.101
libavformat 57. 72.101 / 57. 72.101
libavdevice 57. 7.100 / 57. 7.100
libavfilter 6. 90.100 / 6. 90.100
libswscale 4. 7.101 / 4. 7.101
libswresample 2. 8.100 / 2. 8.100
libpostproc 54. 6.100 / 54. 6.100
[mpegts @ 0x1a57390] Could not detect TS packet size, defaulting to non-FEC/DVHS
http://54.183.232.63:12221: could not find codec parameters
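Neither failure comes from the overlay filter itself. In the first run, ffmpeg exits because video.mp4 does not exist in the current directory; in the second, it cannot probe the MPEG-TS stream at that URL. Since this script is the producer of the stream, the natural place for the logo is the capture command built in handleLinux: add the logo as a second input and an overlay filter to videoCommandLine. A rough sketch of what that command could look like, where the logo path, the overlay position, the bitrate, and the SERVER and VIDEO_PORT placeholders are assumptions rather than values taken from the script:

/usr/local/bin/ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video0 -i /home/pi/logo.png \
-filter_complex "[0:v][1:v]overlay=10:10[out]" -map "[out]" \
-f mpegts -codec:v mpeg1video -s 640x480 -b:v 450k -bf 0 -muxdelay 0.001 http://SERVER:VIDEO_PORT/hello/640/480/

Note that -map "[out]" replaces the default stream selection, and -vf cannot be combined with -filter_complex, so if --rotate180 is used the transpose filters would have to move into the filter graph as well.
-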
Frame rate converter to increase or decrease the fps count [on hold]
30 June 2017, by Siddharth
I am trying to create a frame rate converter, specifically in C++. The objective is either to decrease the frame rate of a video segment (for example from 29.97 fps to 25 fps, by deleting frames that are redundant or reused) or to increase it (for example from 25 fps to 30 fps, by inserting frames). Ideally I would like to load the video file, check its fps, and then increase or decrease the fps accordingly. Is there any way to achieve this with ffmpeg or OpenCV?
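With ffmpeg, the fps filter already covers both directions: it drops frames to lower the rate and duplicates frames to raise it, and the minterpolate filter can synthesize intermediate frames instead of duplicating them. A minimal sketch (the file names are placeholders):

# lower 29.97 fps to 25 fps by dropping frames
ffmpeg -i input.mp4 -vf fps=25 -c:a copy output_25.mp4
# raise 25 fps to 30 fps by duplicating frames
ffmpeg -i input.mp4 -vf fps=30 -c:a copy output_30.mp4
# raise the rate with motion-compensated interpolation instead (much slower)
ffmpeg -i input.mp4 -vf minterpolate=fps=30 output_30_mi.mp4

To check the source rate first, ffprobe can report it:
ffprobe -v error -select_streams v:0 -show_entries stream=avg_frame_rate -of csv=p=0 input.mp4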