# README:
# Copy this file to /usr/lib/systemd/system/
# sudo systemctl daemon-reload
# systemctl enable ipython-notebook
# systemctl start ipython-notebook
# The WorkingDirectory and ipython-dir must exist
# If you don't want anything fancy, go to http://127.0.0.1:8888 to see your notebook
# whenever you want it
[Unit]
// =========================================================
// FSM
// =========================================================
FSM_NULL = 0
FSM_LOGIN_SCREEN = 1
FSM_CHAR_SELECT = 2
FSM_CHAR_CREATE = 3
FSM_LOADING = 4
FSM_NEW_CHAR = 5
// =====================================================
// GLOBAL VARIABLES
// =====================================================
// misc
Dim CURSOR_LOOT, LOOT_X, LOOT_Y, SCREEN_WIDTH, SCREEN_HEIGHT
CURSOR_LOOT = 1434778618
LOOT_X = -1
// =================================================
// Define preset key bindings
// =================================================
Dim KEY_OPEN_BAG
Dim KEY_MOUNT
Dim KEY_TARGET_VENDOR
Dim KEY_INTERACT_TARGET
KEY_OPEN_BAG = "1"
#include "MPI.h" // 扣分点!
float a[200][200], b[200][200], PROD = 1;
int myrank, numproc;
MPI_Init(NULL, NULL);  // NULL, NULL is allowed when argc/argv are not available
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
MPI_Comm_size(MPI_COMM_WORLD, &numproc);
// MPI-Scatter
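// A minimal sketch (my own illustration, not from the original notes) of MPI_Scatter
// applied to the arrays above. Assumes numproc divides 200 evenly; `rows` and
// `local_a` are names introduced here. Rank 0 hands each process a block of rows of a.
int rows = 200 / numproc;                       // rows owned by each process
float local_a[200][200];                        // oversized local buffer, kept simple
MPI_Scatter(a,       rows * 200, MPI_FLOAT,     // send rows*200 floats to each rank
            local_a, rows * 200, MPI_FLOAT,     // receive into the local block
            0, MPI_COMM_WORLD);                 // root is rank 0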
// ### Send and Receive
int MPI_Send(void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm);
int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
             int source, int tag, MPI_Comm comm, MPI_Status *status);
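// A minimal sketch (my own illustration) of point-to-point use of the calls above:
// rank 0 sends one int to rank 1. `value` and `status` are names introduced here.
int value = 42;
MPI_Status status;
if (myrank == 0) {
    MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);           // dest = 1, tag = 0
} else if (myrank == 1) {
    MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);  // source = 0, tag = 0
}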
// ### Size and Rank
int MPI_Comm_size(MPI_Comm comm, int *size);
int MPI_Comm_rank(MPI_Comm comm, int *rank);
Gateswong / CS546-Lecture23.md
Design Parallel Algorithms (2)

Gauss Elimination

First let's make some assumptions:

- one base op (+, *, -, /) takes 1 unit of time
- hypercube with store-and-forward (SF) routing, with message startup time t_s and per-word transfer time t_w
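
Under these assumptions, a rough cost sketch (standard results, not worked out in the notes above): serial forward elimination takes roughly

$$ T_s \approx \sum_{k=1}^{n-1} 2(n-k)^2 \approx \tfrac{2}{3} n^3 $$

unit operations, and a one-to-all broadcast of an m-word pivot row on a hypercube with SF routing costs about $(t_s + t_w m)\log p$, which is the per-iteration communication term in the row-wise parallel version.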
    

HPF Data Mapping

Two-phase data mapping: ALIGN, DISTRIBUTE

Goals

Avoid contention: when data is distributed by rows, the rows can be updated in parallel.

Locality of reference: data items that are always used together (e.g., operands of the same instruction) should be placed on the same processor.
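
A rough C sketch (my own illustration, not HPF syntax; `N`, `num_proc`, and `my_rank` are assumed to be defined elsewhere) of what a BLOCK row distribution gives: each process owns one contiguous block of rows, so the blocks can be updated in parallel without contention, and data used together stays on one process.

int rows_per_proc = (N + num_proc - 1) / num_proc;   /* ceiling division */
int first = my_rank * rows_per_proc;
int last  = first + rows_per_proc < N ? first + rows_per_proc : N;
for (int i = first; i < last; i++) {
    /* update only the locally owned rows; no two processes touch the same row */
}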

#include <mpi.h>
int main(int argc, char *argv[]) {
    int my_rank, num_proc;
    int a, b, i;                              // it depends
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);  // this process's rank
    MPI_Comm_size(MPI_COMM_WORLD, &num_proc); // total number of processes
    MPI_Finalize();
    return 0;
}