This is a tutorial on installing the SystemC library on a Linux machine. If you are using a Windows machine and don't have access to a Linux machine, I suggest using WSL (Windows Subsystem for Linux).
function ray_casting(point, polygon){
    // Even-odd rule: toggle is_in on each edge crossed by a horizontal ray from `point`.
    // Assumes a closed vertex list (first vertex repeated at the end), hence n-1 edges.
    var n=polygon.length,
        is_in=false,
        x=point[0],
        y=point[1],
        x1,x2,y1,y2;
    for(var i=0; i < n-1; ++i){
        x1=polygon[i][0];
        x2=polygon[i+1][0];
        y1=polygon[i][1];
        y2=polygon[i+1][1];
        if((y < y1) != (y < y2) && x < x1 + (y-y1)*(x2-x1)/(y2-y1))
            is_in = !is_in;
    }
    return is_in;
}
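Since most of the code in this collection is Python, here is a direct port of the same even-odd test with a quick usage check; the Python version and the test points are mine, not part of the original gist.

```python
def ray_casting(point, polygon):
    """Even-odd rule: count crossings of a horizontal ray from `point`.

    Assumes `polygon` is a closed vertex list (first vertex repeated at
    the end), matching the JavaScript version above.
    """
    x, y = point
    inside = False
    for (x1, y1), (x2, y2) in zip(polygon, polygon[1:]):
        # Toggle on every edge the ray crosses to the right of `point`.
        # The first condition also guards the division when y1 == y2.
        if (y < y1) != (y < y2) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

# Example: the centre of the unit square is inside, a point at x=2 is not.
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
assert ray_casting((0.5, 0.5), square)
assert not ray_casting((2.0, 0.5), square)
```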
# IDA (disassembler) and Hex-Rays (decompiler) plugin for Apple AMX
#
# WIP research. (This was edited to add more info after someone posted it to
# Hacker News. Click "Revisions" to see full changes.)
#
# Copyright (c) 2020 dougallj
# Based on Python port of VMX intrinsics plugin:
# Copyright (c) 2019 w4kfu - Synacktiv
**Note: in everything below, "slave" can also mean "interconnect".**
- Do we really need back-pressure?
- Yes, you absolutely need backpressure. What happens when two masters want to access the same slave? One has to be blocked for some period of time. Some slaves may only be able to handle a limited number of concurrent operations and take some time to produce a result. As such, backpressure is required.
- B and R channel backpressure is required in the case of contention towards the master. If a master makes burst read requests against two different slaves, one of them is gonna have to wait.
- Shouldn't a master be prepared to receive the responses for any requests it issues from the moment it makes the request? Aside from the clock crossing issue someone else brought up, and the interconnect issue at the heart of the use of IDs, why should an AXI master ever stall R or B channels?
- The master should be prepared, but it only has one R and one B input, so it can't receive responses from two slaves in the same cycle. When both slaves have a response ready, the interconnect has to stall one of them, and that stall is backpressure (see the sketch below).
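To make the contention argument concrete, here is a toy cycle-based model in Python. It is an illustration of ready/valid backpressure, not an AXI-accurate simulation; the class, latency, and request schedule are all my own assumptions. One slave can hold a single outstanding request, so its ready signal is what stalls the losing master:

```python
from collections import deque

# Toy ready/valid handshake: one slave shared by two masters.
# Illustrative only; names and numbers are not from the AXI spec.
class Slave:
    def __init__(self, latency):
        self.latency = latency
        self.busy_until = 0          # cycle at which the slave frees up

    def ready(self, now):            # the backpressure signal
        return now >= self.busy_until

    def accept(self, now, master):
        self.busy_until = now + self.latency
        return (master, self.busy_until)

slave = Slave(latency=2)
pending = deque()                                # responses in flight
requests = [(0, "M0"), (0, "M1"), (1, "M1")]     # (issue cycle, master)

for cycle in range(8):
    for req in list(requests):
        issue, master = req
        if issue > cycle:
            continue
        if slave.ready(cycle):       # handshake completes: valid && ready
            pending.append(slave.accept(cycle, master))
            requests.remove(req)
            print(f"cycle {cycle}: {master} accepted")
        else:                        # ready is low: the master must wait
            print(f"cycle {cycle}: {master} stalled")
    while pending and pending[0][1] <= cycle:
        master, _ = pending.popleft()
        print(f"cycle {cycle}: response delivered to {master}")
```

If the slave could never deassert ready, it would have to either drop the second request or buffer requests without bound; deasserting ready pushes the stall back onto the master instead, which is the whole point of backpressure.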
FWIW: I (@rondy) am not the creator of the content shared here, which is an excerpt from Edmond Lau's book. I simply copied and pasted it from another location and saved it as a personal note before it gained popularity on news.ycombinator.com. Unfortunately, I cannot recall the exact origin of the original source, nor was I able to find the author's name, so I can't provide the appropriate credits.
- By Edmond Lau
- Highly Recommended 👍
- http://www.theeffectiveengineer.com/
A curated list of AWS resources to prepare for the AWS Certifications
A curated list of awesome AWS resources you need to prepare for all 5 AWS certifications. This gist includes: open-source repos, blogs & blog posts, ebooks, PDFs, whitepapers, video courses, free lectures, slides, sample tests, and many other resources.
""" Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """ | |
import numpy as np | |
import cPickle as pickle | |
import gym | |
# hyperparameters | |
H = 200 # number of hidden layer neurons | |
batch_size = 10 # every how many episodes to do a param update? | |
learning_rate = 1e-4 | |
gamma = 0.99 # discount factor for reward |
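The excerpt stops at the hyperparameters. As a sketch of what the `gamma` discount factor is for, here is a reconstructed helper, not a quotation from the script; the helper name and the reset at a nonzero reward are my assumptions, the latter reflecting Pong's structure, where each scored point ends a rally:

```python
import numpy as np

def discount_rewards(r, gamma=0.99):
    """Return exponentially discounted rewards.

    discounted[t] = r[t] + gamma * r[t+1] + gamma^2 * r[t+2] + ...
    (Sketch only; name and reset logic are assumptions, not quoted
    from the excerpt above.)
    """
    discounted = np.zeros_like(r, dtype=np.float64)
    running = 0.0
    for t in reversed(range(len(r))):
        if r[t] != 0:
            running = 0.0  # a point was scored: don't propagate across rallies
        running = running * gamma + r[t]
        discounted[t] = running
    return discounted

# Example: a single reward at the end of a three-step episode.
print(discount_rewards(np.array([0.0, 0.0, 1.0])))  # [0.9801 0.99 1.]
```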
https://devtalk.nvidia.com/default/topic/933827/cuda-programming-and-performance/fast-256-bin-histogram/
http://www.cse.uconn.edu/~zshi/course/cse5302/ref/chhugani08sorting.pdf
http://link.springer.com/chapter/10.1007/978-3-642-23397-5_16
http://arxiv.org/abs/1008.2849 - Faster Radix Sort via Virtual Memory and Write-Combining, Jan Wassenberg and Peter Sanders
https://devtalk.nvidia.com/default/topic/378826/cuda-programming-and-performance/my-speedy-sgemm/post/2703033/#2703033
https://devtalk.nvidia.com/default/topic/390366/cuda-programming-and-performance/instruction-latency/post/2768197/#2768197
https://devtalk.nvidia.com/default/topic/913832/cuda-programming-and-performance/sum-reduction-working-in-fermi-kepler-and-maxwell/
https://devtalk.nvidia.com/default/topic/776043/cuda-programming-and-performance/whats-new-in-maxwell-sm_52-gtx-9xx-/1
https://devtalk.nvidia.com/default/topic/690631/cuda-programming-and-performance/so-whats-new-about-maxwell-/post/4305310/#4305310