
Media (91)
-
Collections - Quick creation form
19 February 2013, by
Updated: February 2013
Language: French
Type: Image
-
Les Miserables
4 June 2012, by
Updated: February 2013
Language: English
Type: Text
-
Not displaying certain information: home page
23 November 2011, by
Updated: November 2011
Language: French
Type: Image
-
The Great Big Beautiful Tomorrow
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
-
Richard Stallman et la révolution du logiciel libre - Une biographie autorisée (epub version)
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (106)
-
Keeping control of your media in your hands
13 April 2011, by
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...) -
Multilang: improving the interface for multilingual blocks
18 February 2011, by
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
Once it has been activated, MediaSPIP init automatically puts a preconfiguration in place so that the new feature is immediately operational. No separate configuration step is therefore required. -
Personalising categories
21 June 2013, by
Category creation form
For those who know SPIP well, a category can be thought of as a rubrique (section).
For a document of type category, the fields offered by default are: Text
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a document of type media, the fields not displayed by default are: Descriptif rapide (short description)
It is also in this part of the configuration that you can indicate the (...)
On other sites (10172)
-
Live stream gets delayed while processing frames in OpenCV + Python
18 March 2021, by Himanshu sharma
I capture and process an IP camera RTSP stream with OpenCV 4.4.0.46 on Ubuntu.
Unfortunately the processing takes quite a lot of time, roughly 0.2 s per frame, and the stream quickly gets delayed.
Each video file is supposed to cover 5 minutes, but because of this delay only 3-4 minutes of video end up being saved.


Can we process faster to overcome the delay?


I have two IP cameras with two different fps rates (camera 1 reports 18000 and camera 2 reports 20 fps).


I am running this code on different Ubuntu PCs:


- Python 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0] on linux
- Django==3.1.2
- Ubuntu = 18.04 and 20.04
- opencv-contrib-python==4.4.0.46
- opencv-python==4.4.0.46

input_stream = 'rtsp://'+username+':'+password+'@'+ip+'/user='+username+'_password='+password+'_channel=0channel_number_stream=0.sdp'
# input_stream ---> rtsp://admin:Admin123@192.168.1.208/user=admin_password=Admin123_channel=0channel_number_stream=0.sdp
# input_stream ---> rtsp://Admin:@192.168.1.209/user=Admin_password=_channel=0channel_number_stream=0.sdp

vs = cv2.VideoCapture(input_stream)
fps_rate = int(vs.get(cv2.CAP_PROP_FPS))  # camera 1 reports 18000, camera 2 reports 20 fps

video_file_name = 0
start_time = time.time()
while True:
    ret, frame = vs.read()
    time.sleep(0.2)  # <= simulated processing time (mask detection, face detection and other detection is happening)

    ### Start of writing a video to disk
    minute = 5   # save a file for 5 minutes only, then start saving another file for 5 minutes
    second = 60
    minite_to_save_video = int(minute) * int(second)

    # if we are supposed to be writing a video to disk, initialize
    if time.time() - start_time >= minite_to_save_video or video_file_name == 0:
        # where H = height, W = width, C = channel
        H, W, C = frame.shape

        print('time.time()-->', time.time(), 'video_file_name-->', video_file_name, ' #####')
        start_time = time.time()

        video_file_name = str(time.mktime(datetime.datetime.now().timetuple())).replace('.0', '')
        output_save_directory = output_stream + str(int(video_file_name)) + '.mp4'

        fourcc = cv2.VideoWriter_fourcc(*'avc1')
        writer = cv2.VideoWriter(output_save_directory, fourcc, 20.0, (W, H), True)

    # check to see if we should write the frame to disk
    if writer is not None:
        try:
            writer.write(frame)
        except Exception as e:
            print('Error in writing video output---> ', e)
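
A widely used way to keep an RTSP feed from backing up while a slow per-frame step runs is to read frames on a background thread and only process the most recent one, so heavy processing drops frames instead of delaying the stream. Below is a minimal sketch of that idea, not the poster's code; the LatestFrameReader class name and the 'rtsp://...' placeholder are illustrative only.

import threading
import time

import cv2


class LatestFrameReader:
    """Reads frames on a background thread and keeps only the newest one."""

    def __init__(self, src):
        self.cap = cv2.VideoCapture(src)
        self.lock = threading.Lock()
        self.frame = None
        self.running = True
        threading.Thread(target=self._update, daemon=True).start()

    def _update(self):
        # Drain the RTSP buffer continuously so a backlog never builds up.
        while self.running:
            ret, frame = self.cap.read()
            if ret:
                with self.lock:
                    self.frame = frame

    def read(self):
        with self.lock:
            return None if self.frame is None else self.frame.copy()

    def stop(self):
        self.running = False
        self.cap.release()


# Usage sketch: the slow detection step now only skips frames instead of delaying capture.
reader = LatestFrameReader('rtsp://...')
while True:
    frame = reader.read()
    if frame is None:
        continue
    time.sleep(0.2)  # stand-in for the heavy detection / writing logic above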



-
ffmpeg stops capturing a whole hour of an HTTP stream after some time
7 July 2020, by CompuChip
First of all, sorry if I'm using the wrong terminology. I've been playing around with nginx and I'm still a bit confused about RTMP and HLS and other acronyms.


I've managed to set up OBS to stream to an nginx server, which takes the RTMP stream and chops it into pieces for HLS. Here's the relevant part of the nginx configuration file.


rtmp {
    server {
        listen 1935;
        chunk_size 4000;
        ping 30s;
        deny play all;

        application live {
            live on;
            hls on;
            hls_nested on;              # Create a new folder for each stream
            hls_path /mnt/hls/live;
            hls_fragment 3s;
            hls_fragment_naming timestamp;
            hls_playlist_length 60s;
        }
    }
}

http {
    server {
        listen 81 ssl;

        # creates the http-location for our full-resolution (desktop) HLS stream - "http://localhost:8080/live/test/index.m3u8"
        location /live {
            # Elided caching and CORS for brevity

            alias /mnt/hls/live;
            add_header Cache-Control no-cache;
            index index.m3u8;
        }
    }
}



This works well: I can view the stream in VLC or on a website and it looks smooth. Now I wanted to add some logging. I'd like to write full hours (starting at xx:00:00 and ending at xx:59:59) to a file named log_yyyymmdd_hh.mp4, e.g. log_20200707_18.mp4 for the file covering 7 July 2020, 18:00 - 19:00 hrs. So I've set up an hourly cron job with the following ffmpeg command:

ffmpeg -i https://stream.example.com:81/live/<streamkey> -preset veryfast -maxrate 2000k \
    -bufsize 2000k -g 60 -t 3600 -y /var/video/log/$(date +\%Y\%m\%d_\%H00).mp4 >/dev/null 2>&1


At first this seemed to work well, so I left it running happily for about 24 hours. When I checked, most of my hourly files were small (~100 MB), only about 10 to 15 minutes long. It seems like any small delay in the stream will cause ffmpeg to stop writing to the file. I suspect such hiccups may, for example, be caused by an OBS plugin and I'll need to look into that, but I would prefer that ffmpeg retry for some time before giving up. What arguments should I be passing to ffmpeg to make it not break when the stream is down for, say, up to a second every now and then?

When I view back the HLS files there don't seem to be any noticeable gaps, so eventually all the data arrives. I went for the crontab solution with ffmpeg because, when recording from nginx, I could not figure out how to start recording at the top of the hour.
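
For what it's worth, ffmpeg's HTTP protocol has reconnect options (-reconnect, -reconnect_streamed, -reconnect_delay_max) that make it retry a dropped HTTP/HLS input instead of stopping. A hedged variant of the cron command above might look like the following; the 5-second value is only an example and this has not been tested against this particular setup.

ffmpeg -reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 5 \
    -i https://stream.example.com:81/live/<streamkey> -preset veryfast -maxrate 2000k \
    -bufsize 2000k -g 60 -t 3600 -y /var/video/log/$(date +\%Y\%m\%d_\%H00).mp4 >/dev/null 2>&1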

-
What am I doing wrong? Tweepy with ffmpeg
27 August 2020, by pigeonburger
I'm trying to get this code to pull the media from any tweet that mentions my Twitter handle, convert it using ffmpeg via the subprocess module, then send the converted media back to the person as a reply. Is this all correct?

I am also getting an error at tweet_media = clean_data['entities']['media']['media_url'] and I don't understand what I'm doing wrong there:

Exception has occurred: TypeError
list indices must be integers or slices, not str
  line 32, in on_data
    tweet_media = clean_data['entities']['media']['media_url']

Also, is there a better way to use ffmpeg with Python that I am not aware of?


Here is the code I wrote that I'm trying to use:


import tweepy
from tweepy import Stream
from tweepy.streaming import StreamListener
from datetime import datetime
import time
import subprocess

stdout = subprocess.PIPE

def runcmd(cmd):
    x = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    return x.communicate(stdout)

print(" TWITTER BOT")
time.sleep(1.5)
print(" By PigeonBurger, updated 26 August 2020 \n")

import json
import random

class StdOutListener(StreamListener):
    def on_data(self, data):
        clean_data = json.loads(data)
        tweetId = clean_data['id']
        tweet_name = clean_data['user']['screen_name']
        tweet_media = clean_data['entities']['media']['media_url']
        tweet_photo = runcmd('ffmpeg -i tweet_media output.jpg')
        print(clean_data)
        tweet = 'Here ya go'
        now = datetime.now()
        dt_string = now.strftime("%d/%m/%Y %H:%M:%S")
        print(' Reply sent to @'+tweet_name, 'on', dt_string, '\n' ' Message:', tweet, '\n')
        respondToTweet(tweet_photo, tweet, tweetId)

def setUpAuth():
    auth = tweepy.OAuthHandler("consumer_key", "consumer_secret")
    auth.set_access_token("access_token", "access_token_secret")
    api = tweepy.API(auth)
    return api, auth

def followStream():
    api, auth = setUpAuth()
    listener = StdOutListener()
    stream = Stream(auth, listener)
    stream.filter(track=["@YOUR_TWITTER_HANDLE"], is_async=True)

def respondToTweet(tweet_photo, tweet, tweetId):
    api, auth = setUpAuth()
    api.update_with_media(tweet_photo, tweet, in_reply_to_status_id=tweetId, auto_populate_reply_metadata=True, stall_warnings=True)

if __name__ == "__main__":
    followStream()
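
For context on the TypeError above: in the Twitter API payload, clean_data['entities']['media'] is a list of media objects, so the URL is normally read from the first element rather than indexed by a string key, and the ffmpeg command needs the actual URL rather than the literal text tweet_media. A minimal sketch of how those two pieces could look; the helper function names here are made up for illustration and are not part of the poster's code.

import subprocess

def extract_media_url(clean_data):
    # 'entities' -> 'media' is a list of media objects; take the first one if present
    media_list = clean_data.get('entities', {}).get('media', [])
    return media_list[0]['media_url'] if media_list else None

def convert_to_jpg(media_url, out_path='output.jpg'):
    # Pass the real URL to ffmpeg; an argument list avoids shell-quoting issues
    subprocess.run(['ffmpeg', '-y', '-i', media_url, out_path], check=True)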