
Media (3)
-
Elephants Dream - Cover of the soundtrack
17 October 2011
Updated: October 2011
Language: English
Type: Image
-
Valkaama DVD Label
4 October 2011
Updated: February 2013
Language: English
Type: Image
-
Publishing an image simply
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (107)
-
Improvements to the base version
13 September 2013
Nicer multiple selection
The Chosen plugin improves the usability of multiple-select fields. See the two images below for a comparison.
Simply activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...)
-
Farm management
2 March 2010
The farm as a whole is managed by "super admins".
Some settings can be adjusted to regulate the needs of the different channels.
Initially it uses the "Gestion de mutualisation" plugin
-
MediaSPIP Player: potential problems
22 February 2011
The player does not work on Internet Explorer
On Internet Explorer (at least 8 and 7), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate module.
If that module's configuration contains a line resembling the following, try removing or commenting it out to see whether the player then works correctly: (...)
On other sites (9589)
-
ffmpeg rtsp protocol not found
8 January 2021, by fade
The bug:
(base) [xc@localhost test]$ python rtsp2rtmp.py
command: ffmpeg -y -hwaccel_output_format cuda -hwaccel cuvid -probesize 42M -hwaccel_device 2 -f rawvideo -c:v hevc_cuvid -pix_fmt bgr24 -s 1920x1080 -r 25 -i rtsp://admin:scut123456@192.168.1.43:554/Streaming/Channels/1 -c:v h264_nvenc -pix_fmt yuv420p -f flv rtmp://192.168.1.23:1935/live/cam1
ffmpeg version 4.2 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 7 (GCC)
configuration: --enable-nonfree --enable-cuda-nvcc --enable-libnpp --enable-shared --enable-cuda --enable-cuvid --enable-nvenc --extra-cflags=-I/usr/local/cuda/include --extra-ldflags=-L/usr/local/cuda/lib64
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
rtsp://admin:scut123456@192.168.1.43:554/Streaming/Channels/1: Protocol not found
Did you mean file:rtsp://admin:scut123456@192.168.1.43:554/Streaming/Channels/1?
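For background on that error: FFmpeg picks the protocol handler from everything before the first "://" in the input URL, and "Protocol not found" means that prefix did not match any protocol compiled into the build (`ffmpeg -protocols` lists them). A much-simplified sketch of that lookup, not FFmpeg's actual code, with a hypothetical subset of registered protocols:

```python
def resolve_protocol(url, registered=("file", "rtsp", "rtmp", "http", "tcp", "udp")):
    """Very rough model of how FFmpeg maps a URL prefix to a protocol."""
    scheme, sep, _rest = url.partition("://")
    if not sep:
        return "file"  # no scheme at all: FFmpeg falls back to the file protocol
    if scheme in registered:
        return scheme
    # Anything else (a typo, stray whitespace, or a protocol the build lacks)
    # is the "Protocol not found" case.
    raise ValueError("Protocol not found: {!r}".format(scheme))

print(resolve_protocol("rtsp://admin:pwd@192.168.1.43:554/Streaming/Channels/1"))  # rtsp
```

One common real-world cause of this error is an FFmpeg build compiled without network protocol support, in which case `rtsp` will be missing from the `ffmpeg -protocols` output.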


The failing code:
rtsp2rtmp.py


import time
import cv2
import subprocess as sp
import os

if __name__ == "__main__":
    sources = [
        "rtsp://admin:scut123456@192.168.1.43:554/Streaming/Channels/1"
    ]
    rtmpUrl_base = "rtmp://192.168.1.23:1935/live/cam"
    fps = '25'
    solution = (1920, 1080)
    rtmpUrl = rtmpUrl_base + str(1)
    command = ['ffmpeg',
               '-y',
               '-hwaccel_output_format', 'cuda',
               '-hwaccel', 'cuvid',
               '-probesize', '42M',
               '-hwaccel_device', '2',
               '-f', 'rawvideo',
               '-c:v', 'hevc_cuvid',
               '-pix_fmt', 'bgr24',
               # '--enable-network', '--enable-protocol=tcp', '--enable-demuxer=rtsp', '--enable-decoder=h264',
               '-s', "{}x{}".format(solution[0], solution[1]),  # frame resolution
               '-r', fps,  # video frame rate
               '-i', sources[0],
               # '-vf', 'scale_npp={}:-1'.format(basic_data[0]),  # frame resolution
               '-c:v', 'h264_nvenc',
               '-pix_fmt', 'yuv420p',
               '-f', 'flv',
               # '-pre', 'superfast',
               rtmpUrl]
    cmdString = ' '.join(command)
    print("command:", cmdString)
    os.popen(cmd=cmdString)
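Independent of the protocol error, joining the argument list into one string and running it through a shell with `os.popen` is fragile: characters such as `$`, `!`, or `&` in an RTSP password get interpreted by the shell, whereas `subprocess.Popen` with a list (as the second script does) hands each argument to ffmpeg untouched. A minimal sketch of the difference, using a hypothetical password containing shell metacharacters:

```python
import shlex

# Hypothetical URL whose password contains shell-significant characters.
url = "rtsp://admin:pa$s!word@192.168.1.43:554/Streaming/Channels/1"
command = ["ffmpeg", "-i", url, "-f", "flv", "rtmp://192.168.1.23:1935/live/cam1"]

# Fragile: a shell re-parses this string, so $s and ! can be expanded or mangled.
shell_string = " ".join(command)

# Safe: subprocess.Popen(command) passes each list element verbatim, no shell.
# (Not launched here, since ffmpeg may not be installed.)

# If a single string is unavoidable, quote every argument first:
safe_string = " ".join(shlex.quote(arg) for arg in command)
print(safe_string)
```

`shlex.quote` wraps any argument containing metacharacters in single quotes, so the string round-trips back to the exact argument list.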



This script runs on CentOS 7.

However, when I run the code below, it works fine:


import time
import cv2
import subprocess as sp


def read(source, command):
    cap = cv2.VideoCapture(source)
    count = 0
    tick = time.time()
    push_p = sp.Popen(command, stdin=sp.PIPE)
    while cap.isOpened():
        _, frame = cap.read()
        if frame is None:
            continue
        count += 1
        push_p.stdin.write(frame.tostring())
        if count % 25 == 0:
            tock = time.time()
            print('Read 25 frames in', tock - tick)
            tick = tock


if __name__ == '__main__':
    sources = [
        'rtsp://admin:scut123456@192.168.1.43:554/Streaming/Channels/1',
    ]
    rtmpUrl_base = "rtmp://192.168.1.23:1935/live/cam"
    fps = '25'
    solution = (1920, 1080)
    rtmpUrl = rtmpUrl_base + str(1)
    command = ['ffmpeg',
               '-hwaccel_output_format', 'cuda',
               '-hwaccel', 'cuvid',
               '-probesize', '42M',
               '-hwaccel_device', '2',
               '-f', 'rawvideo', '-re',
               # '-vcodec', 'hevc_cuvid',
               '-pix_fmt', 'bgr24',
               '-s', "{}x{}".format(solution[0], solution[1]),  # frame resolution
               '-r', fps,  # video frame rate
               '-i', '-',
               # '-vf', 'scale_npp={}:-1'.format(basic_data[0]),  # frame resolution
               '-c:v', 'h264_nvenc',
               '-pix_fmt', 'yuv420p',
               '-f', 'flv',
               # '-pre', 'superfast',
               rtmpUrl]
    print("command: ", command)
    print('')
    read(sources[0], command)
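Because this second script feeds raw bgr24 frames to ffmpeg over stdin, every write must be exactly width × height × 3 bytes, or the rawvideo demuxer drifts out of sync with frame boundaries. A small sanity check of that arithmetic (pure Python, no OpenCV needed):

```python
def raw_frame_size(width, height, bytes_per_pixel=3):
    """Bytes per frame for a packed pixel format such as bgr24 (3 bytes/pixel)."""
    return width * height * bytes_per_pixel

expected = raw_frame_size(1920, 1080)
print(expected)  # 6220800 bytes per 1920x1080 bgr24 frame
```

In the read loop one could verify `len(frame.tobytes()) == expected` before writing (`frame.tostring()` is a deprecated alias for `tobytes()` in recent NumPy versions).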



-
How do I prevent ffmpeg or gstreamer from terminating?
30 May 2021, by mohammad hasani
I'm trying to get some metadata (VCA data) from ONVIF and I'm using GStreamer for that.
It works fine and gives me XML data, but if it remains idle for 20 seconds I get this error:
Got EOS from element "pipeline0"


and GStreamer closes. How can I stop the EOS? Is it coming from the camera or from GStreamer?


I'm using this command:

gst-launch-1.0.exe rtspsrc location=rtsp://username:password@ip/rtsp_tunnel?vcd=2 ! application/x-rtp, media=application ! fakesink dump=true

And this code:




from threading import Thread
from time import sleep
import gi
import sys
import xmltodict


def xml_parser(data):
    try:
        print('ok')
        doc = xmltodict.parse(data)
        doc = doc['tt:MetadataStream']['tt:VideoAnalytics']['tt:Frame']

        _utc_time = doc['@UtcTime']
        _object_id = doc['tt:Object']['@ObjectId']

        doc = doc['tt:Object']['tt:Appearance']['tt:Shape']

        _bottom = doc['tt:BoundingBox']['@bottom']
        _top = doc['tt:BoundingBox']['@top']
        _right = doc['tt:BoundingBox']['@right']
        _left = doc['tt:BoundingBox']['@left']

        _center_of_gravity_x = doc['tt:CenterOfGravity']['@x']
        _center_of_gravity_y = doc['tt:CenterOfGravity']['@y']
        doc = doc['tt:Polygon']['tt:Point']
        points = list()
        points.append(_utc_time)
        points.append(_object_id)
        points.append(_bottom)
        points.append(_top)
        points.append(_right)
        points.append(_left)
        points.append(_center_of_gravity_x)
        points.append(_center_of_gravity_y)
        for point in doc:
            x = point['@x']
            y = point['@y']
            points.append(x)
            points.append(y)
        with open('points.txt', 'a+') as f:
            f.write(', '.join(points))
            f.write('\r\n')
        print(points)

    except:
        pass


def on_new_buffer(p1):
    sample = p1.try_pull_sample(Gst.SECOND)
    if sample is None:
        return
    buffer = sample.get_buffer()
    buffer = buffer.extract_dup(0, buffer.get_size())
    print(buffer)


uri = 'rtsp://username:password@ip/rtsp_tunnel?vcd=2'

gi.require_version("Gst", "1.0")
gi.require_version("GstApp", "1.0")

from gi.repository import Gst, GstApp, GLib

Gst.init()

main_loop = GLib.MainLoop()
main_loop_thread = Thread(target=main_loop.run)
main_loop_thread.start()

pipeline = Gst.parse_launch("rtspsrc location=rtsp://service:tam!dhc0@172.16.130.110/rtsp_tunnel?vcd=2 ! application/x-rtp, media=application ! appsink emit-signals=True name=sink")
pipeline.set_delay(0)
pipeline.set_latency(0)

sink = pipeline.get_by_name('sink')
# sink.connect('new_sample', on_new_buffer)
# sink.set_property('emit-signals', True)
pipeline.set_state(Gst.State.PLAYING)
data = list()

try:
    while True:
        sample = sink.try_pull_sample(Gst.SECOND)
        print(sample)
        if sample is None:
            continue
        print('okkk')
        buffer = sample.get_buffer()
        buffer = buffer.extract_dup(0, buffer.get_size())
        buffer = buffer[12:].decode()
        data.append(buffer)
        if buffer.endswith(''):
            data = ''.join(data)
            xml_parser(data)
            data = list()
except Exception as e:
    print(e)

main_loop.quit()
main_loop_thread.join()
print('end')
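The script's `xml_parser` depends on the third-party xmltodict package. For reference, the same tt:MetadataStream extraction can be sketched with the standard library's `xml.etree.ElementTree`; the XML below is a hypothetical, heavily simplified ONVIF-style frame, not output captured from the camera:

```python
import xml.etree.ElementTree as ET

# Hypothetical, minimal ONVIF-style metadata frame (real streams carry more).
SAMPLE = """<tt:MetadataStream xmlns:tt="http://www.onvif.org/ver10/schema">
  <tt:VideoAnalytics>
    <tt:Frame UtcTime="2021-05-30T12:00:00Z">
      <tt:Object ObjectId="7">
        <tt:Appearance>
          <tt:Shape>
            <tt:BoundingBox bottom="0.1" top="0.9" right="0.8" left="0.2"/>
            <tt:CenterOfGravity x="0.5" y="0.5"/>
          </tt:Shape>
        </tt:Appearance>
      </tt:Object>
    </tt:Frame>
  </tt:VideoAnalytics>
</tt:MetadataStream>"""

NS = {'tt': 'http://www.onvif.org/ver10/schema'}

def parse_frame(xml_text):
    """Pull the frame time, object id, bounding box, and center of gravity."""
    root = ET.fromstring(xml_text)
    frame = root.find('.//tt:Frame', NS)
    obj = frame.find('tt:Object', NS)
    box = obj.find('.//tt:BoundingBox', NS)
    cog = obj.find('.//tt:CenterOfGravity', NS)
    return {
        'utc_time': frame.get('UtcTime'),
        'object_id': obj.get('ObjectId'),
        'box': {k: float(box.get(k)) for k in ('bottom', 'top', 'right', 'left')},
        'center': (float(cog.get('x')), float(cog.get('y'))),
    }

print(parse_frame(SAMPLE))
```

Unlike xmltodict, ElementTree resolves the `tt:` prefix through an explicit namespace map, which avoids silent failures when a camera declares a different prefix for the same namespace URI.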







-
avfilter/vf_dnn_processing: add a generic filter for image processing with dnn networks
31 October 2019, by Guo, Yejun
This filter accepts all the dnn networks which do image processing.
Currently, frames with format rgb24 or bgr24 are supported. Other
formats such as gray and YUV will be supported next. The dnn network
can accept data in float32 or uint8 format, and the dnn network can
change the frame size.
The following is a python script that halves the value of the first
channel of each pixel. It demos how to set up and execute a dnn model
with python+tensorflow. It also generates the .pb file which will be
used by ffmpeg.

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('in.bmp')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
filter_data = np.array([0.5, 0, 0, 0, 1., 0, 0, 0, 1.]).reshape(1, 1, 3, 3).astype(np.float32)
filter = tf.Variable(filter_data)
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
y = tf.nn.conv2d(x, filter, strides=[1, 1, 1, 1], padding='VALID', name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
output = sess.run(y, feed_dict={x: in_data})
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'halve_first_channel.pb', as_text=False)
output = output * 255.0
output = output.astype(np.uint8)
imageio.imsave("out.bmp", np.squeeze(output))

To do the same thing with ffmpeg:
- generate halve_first_channel.pb with the above script
- generate halve_first_channel.model with tools/python/convert.py
- try with following commands:
./ffmpeg -i input.jpg -vf dnn_processing=model=halve_first_channel.model:input=dnn_in:output=dnn_out:fmt=rgb24:dnn_backend=native -y out.native.png
./ffmpeg -i input.jpg -vf dnn_processing=model=halve_first_channel.pb:input=dnn_in:output=dnn_out:fmt=rgb24:dnn_backend=tensorflow -y out.tf.png

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
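The 1x1 convolution in the commit's script is just a per-pixel matrix multiply: with the filter reshaped to [1, 1, in_channels=3, out_channels=3], output channel j is the sum over i of pixel[i] * W[i][j], so this particular W halves channel 0 and passes channels 1 and 2 through unchanged. A pure-Python check of that arithmetic, independent of TensorFlow:

```python
# Filter laid out as W[in_channel][out_channel], matching the
# (1, 1, 3, 3) reshape of [0.5, 0, 0, 0, 1., 0, 0, 0, 1.] above.
W = [[0.5, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]

def conv1x1(pixel, weights):
    """Apply a 1x1 convolution to one pixel: out[j] = sum_i pixel[i] * W[i][j]."""
    return [sum(pixel[i] * weights[i][j] for i in range(3)) for j in range(3)]

print(conv1x1([1.0, 0.5, 0.25], W))  # [0.5, 0.5, 0.25]: first channel halved
```

This is why the filter works identically under the native and tensorflow backends: the model encodes nothing but this fixed per-pixel linear map.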